Research Article

Two-sided bounds on the mean vector and covariance matrix in linear stochastically excited vibration systems with application of the differential calculus of norms

Article: 1021603 | Received 30 Sep 2014, Accepted 20 Nov 2014, Published online: 20 Mar 2015

Abstract

For a linear stochastic vibration model in state-space form, $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$, with system matrix $A$ and white noise excitation $b(t)$, under certain conditions, the solution $x(t)$ is a random vector that can be completely described by its mean vector $m_x(t):=m(x(t))$ and its covariance matrix $P_x(t):=P(x(t))$. If matrix $A$ is asymptotically stable, then $m_x(t)\to 0\ (t\to\infty)$ and $P_x(t)\to P\ (t\to\infty)$, where $P$ is a positive (semi-)definite matrix. As the main new points, in this paper, we derive two-sided bounds on $\|m_x(t)\|_2$ and $\|P_x(t)-P\|_2$ as well as formulas for the right norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$, and apply these results to the computation of the best constants in the two-sided bounds. The obtained results are of special interest to applied mathematicians and engineers.


Public Interest Statement

In recent years, the author has developed a differential calculus for norms of vector and matrix functions. More precisely, differentiability properties of these quantities were derived for various vector and matrix norms, and formulas for the pertinent (right-hand, resp. left-hand) derivatives were obtained. These results have been applied to a number of linear and non-linear problems by computing the best constants in two-sided bounds on the solution of the pertinent initial value problems. In the present paper, the application area is extended to stochastically excited vibration systems. Specifically, new two-sided estimates on the mean vector and the covariance matrix are derived, and the optimal constants in these bounds are computed in a numerical example employing the differential calculus of norms.

1. Introduction

In this paper, linear stochastic vibration models of the form $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$, with real system matrix $A$ and white noise excitation $b(t)$ are investigated, in which the initial vector $x_0$ can be completely characterized by its mean vector $m_0$ and its covariance matrix $P_0$. Likewise, the solution $x(t)$, also called the response, is a random vector that can be described by its mean vector $m_x(t):=m(x(t))$ and its covariance matrix $P_x(t):=P(x(t))$. For asymptotically stable matrices $A$, it is known that $m_x(t)\to 0\ (t\to\infty)$ and $P_x(t)\to P\ (t\to\infty)$, where $P$ is a positive (semi-)definite matrix. This leads to the question of the asymptotic behavior of $m_x(t)$ and $P_x(t)-P$. As appropriate norms for the investigation of this problem, the Euclidean norm for $m_x(t)$ and the spectral norm for $P_x(t)-P$ are the natural choices; both norms are denoted by $\|\cdot\|_2$.

The main new points of the paper are

  • the determination of two-sided bounds on $\|m_x(t)\|_2$ and $\|P_x(t)-P\|_2$,

  • the derivation of formulas for the right norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$, and

  • the application of these results to the computation of the best constants in the two-sided bounds.

The paper is structured as follows.

In Section 2, the linear stochastically excited vibration model in state-space form is presented. Then, in Section 3, new two-sided bounds on $\|m_x(t)\|_2$ are determined. In Section 4, preliminary work for two-sided bounds on $\|P_x(t)-P\|_2$ is done that is employed in Section 5 to derive new two-sided bounds on $\|P_x(t)-P\|_2$ itself. In Section 6, the local regularity of $\|P_x(t)-P\|_2$ is studied. In Section 7, as the new result, formulas for the right norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$ are obtained. In Section 8, for the specified data in the stochastically excited model, the differential calculus of norms is applied to compute the best constants in the new two-sided bounds on $\|m_x(t)\|_2$ and $\|P_x(t)-P\|_2$. In Section 9, conclusions are drawn. Finally, in Appendix 1, more details on some items are given.

2. The linear stochastically excited vibration system

In order to make the paper as self-contained as possible, we summarize the known facts on linear stochastically excited systems. In the presentation, we closely follow the line of Müller (Citation1976, Sections 9.1 and 9.2).

So, let us depart from the deterministic model in state-space form
$$\dot{x}(t)=Ax(t)+b(t),\quad t\ge 0,\qquad x(0)=x_0, \tag{1}$$

with system matrix $A\in\mathbb{R}^{n\times n}$, state vector $x(t)\in\mathbb{R}^n$, and excitation vector $b(t)\in\mathbb{R}^n$, $t\ge 0$.

Now, we replace the deterministic excitation $b(t)$ by a stochastic excitation in the form of white noise. Thus, $b(t)$ can be completely described by the mean vector $m_b(t)$ and the central correlation matrix $N_b(t,\tau)$ with
$$m_b(t)=0,\qquad N_b(t,\tau)=Q\,\delta(t-\tau), \tag{2}$$

where $Q=Q_b$ is the $n\times n$ intensity matrix of the excitation and $\delta(t-\tau)$ the $\delta$-function (more precisely, the $\delta$-functional).

From the central correlation matrix, one obtains for $\tau=t$ the positive semi-definite covariance matrix
$$P_b(t):=N_b(t,t). \tag{3}$$

At this point, we mention that the definition of a real positive semi-definite matrix includes its symmetry.

When the excitation is white noise, the deterministic initial value problem (1) can be formally maintained, as the theory of linear stochastic differential equations shows. However, the initial state $x_0$ must be introduced as a Gaussian random vector,
$$x_0\sim(m_0,P_0), \tag{4}$$

which is to be independent of the excitation (2); here, the sign $\sim$ means that the initial state $x_0$ is completely described by its mean vector $m_0$ and its covariance matrix $P_0$. More precisely: $x_0$ is a Gaussian random vector whose density function is completely determined by $m_0$ and $P_0$ alone.

The stochastic response of the system (1) is formally given by
$$x(t)=\Phi(t)\,x_0+\int_0^t\Phi(t-\tau)\,b(\tau)\,d\tau, \tag{5}$$

where, besides the fundamental matrix $\Phi(t)=e^{At}$ and the initial vector $x_0$, a stochastic integral occurs.

It can be shown that the stochastic response $x(t)$ is a non-stationary Gauss–Markov process that can be described by the mean vector $m_x(t):=m(x(t))$ and the correlation matrix $N_x(t,\tau):=N(x(t),x(\tau))$. For $\tau=t$, we get the covariance matrix $P_x(t):=P(x(t))$.

If the system is asymptotically stable, the properties of first and second order of the stochastic response $x(t)$ that we need are given by
$$m_x(t)=\Phi(t)\,m_0,\qquad P_x(t)=\Phi(t)(P_0-P)\Phi^T(t)+P, \tag{6}$$

where the positive semi-definite $n\times n$ matrix $P$ satisfies the Lyapunov matrix equation
$$AP+PA^T+Q=0.$$

This is a special case of the matrix equation $AX+XB=C$, whose solution can be obtained by a method of Ma (Citation1966). For the special case of diagonalizable matrices $A$ and $B$, this is briefly described in Appendix A.1.
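As an illustration beyond the paper (which solves the Lyapunov equation by Ma's method in MATLAB; see Appendix A.1), here is a minimal Python sketch using SciPy's built-in Lyapunov solver; the matrices $A$ and $Q$ below are made-up stand-ins, not the model data:

```python
# Sketch: solve A P + P A^T + Q = 0 for the limiting covariance matrix P.
# scipy.linalg.solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so -Q is passed as the right-hand side.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # made-up asymptotically stable matrix
Q = np.array([[0.0, 0.0],
              [0.0, 0.01]])           # made-up intensity matrix

P = solve_continuous_lyapunov(A, -Q)
print(np.linalg.norm(A @ P + P @ A.T + Q))   # residual ~ 0
```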

For an asymptotically stable matrix $A$, one has $\lim_{t\to\infty}\Phi(t)=0$ and thus, from (6),
$$\lim_{t\to\infty}m_x(t)=0 \tag{7}$$

and
$$\lim_{t\to\infty}P_x(t)=P. \tag{8}$$

Therefore, it is of interest to investigate the asymptotic behavior of $m_x(t)$ and $P_x(t)-P$. This investigation will be done in the next sections by giving two-sided bounds on both quantities in appropriate norms.

Even though the two-sided bounds on $m_x(t)$ can be obtained by just applying known estimates, they will be stated for the sake of completeness in Section 3.

As opposed to this, the determination of two-sided bounds on $P_x(t)-P$ leads to a new and interesting problem; it will be solved in two steps described in Sections 4 and 5.

3. Two-sided bounds on $m_x(t)$

According to Equation (6), we have
$$m_x(t)=\Phi(t)\,m_0,\quad t\ge 0.$$

From Kohaupt (Citation2006, Theorem 8), one obtains two-sided bounds on $\|m_x(t)\|_2$.

To see this, let $m_0\ne 0$ and let $\|\cdot\|_2$ be the Euclidean norm in $\mathbb{R}^n$. Then, there exist a constant $X_0>0$ and for every $\varepsilon>0$ a constant $X_{1,\varepsilon}>0$ such that
$$X_0\,e^{\nu_{m_0}[A]\,t}\le\|m_x(t)\|_2\le X_{1,\varepsilon}\,e^{(\nu_{m_0}[A]+\varepsilon)\,t},\quad t\ge 0, \tag{9}$$

where $\nu_{m_0}[A]$ is the spectral abscissa of matrix $A$ with respect to the vector $m_0$ (see Kohaupt, Citation2006, Section 7, p. 146). We mention that often $\nu_{m_0}[A]=\nu[A]$; cf. Kohaupt (Citation2006, p. 154).
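Recall that the spectral abscissa is $\nu[A]=\max_j\operatorname{Re}\lambda_j(A)$, and that often $\nu_{m_0}[A]=\nu[A]$. A minimal numerical sketch of the bound (9) under this assumption (Python with made-up data; not part of the original paper):

```python
# Sketch: check that ||m_x(t)||_2 = ||Phi(t) m0||_2 decays like e^{nu[A] t},
# assuming nu_{m0}[A] = nu[A] (made-up stable matrix, not the paper's data).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
m0 = np.array([1.0, -1.0])

nu = max(ev.real for ev in np.linalg.eigvals(A))   # spectral abscissa nu[A]

for t in (5.0, 10.0, 20.0):
    ratio = np.linalg.norm(expm(A * t) @ m0) / np.exp(nu * t)
    print(t, ratio)   # stays bounded above and below, as in (9)
```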

4. Preliminary work for two-sided bounds on $P_x(t)-P$

In this section, we derive two-sided bounds that are of general interest beyond their application in Section 5. Therefore, more general assumptions than needed there will be made. We obtain the following lemma.

Lemma 1

(Two-sided bounds on $\|\Psi^*C\Psi\|_2$) Let $C\in\mathbb{C}^{n\times n}$ with $C^*=C$, where $C^*$ is the adjoint of $C$. Further, let $\|\cdot\|_2$ be the spectral norm of a matrix.

Then, the two-sided bound
$$c_0\|\Psi\|_2^2\le\|\Psi^*C\Psi\|_2\le c_1\|\Psi\|_2^2,\quad\Psi\in\mathbb{C}^{n\times m}, \tag{10}$$

is valid, where
$$c_0=\inf_{\|v\|_2=1}|(Cv,v)| \tag{11}$$
$$c_1=\sup_{\|v\|_2=1}|(Cv,v)|. \tag{12}$$

Proof

(i) The decisive tool is the fact that for $A\in\mathbb{C}^{n\times n}$ with $A^*=A$ one has the two representations
$$\|A\|_2=\sup_{\|v\|_2=1}\|Av\|_2=\sup_{\|v\|_2=1}|(Av,v)|.$$

In the following, this will be applied to $\Psi^*C\Psi$.

(ii) Lower bound:

One has
$$\|\Psi^*C\Psi\|_2=\sup_{\|u\|_2=1}|(\Psi^*C\Psi u,u)|=\sup_{\|u\|_2=1}|(C\Psi u,\Psi u)| =\sup_{\substack{\|u\|_2=1\\ \Psi u\ne 0}}\left|\left(C\,\frac{\Psi u}{\|\Psi u\|_2},\frac{\Psi u}{\|\Psi u\|_2}\right)\right|\,\|\Psi u\|_2^2 \ge\sup_{\substack{\|u\|_2=1\\ \Psi u\ne 0}}\inf_{\|v\|_2=1}|(Cv,v)|\,\|\Psi u\|_2^2 =\inf_{\|v\|_2=1}|(Cv,v)|\,\sup_{\|u\|_2=1}\|\Psi u\|_2^2=c_0\|\Psi\|_2^2,\quad\Psi\ne 0.$$

For $\Psi=0$, this lower bound remains valid.

(iii) Upper bound:

Similarly, one obtains
$$\|\Psi^*C\Psi\|_2\le c_1\|\Psi\|_2^2,\quad\Psi\ne 0.$$

For $\Psi=0$, this upper bound remains valid.

Remark

In Lemma 1, it is known that
$$c_1=\sup_{\|v\|_2=1}|(Cv,v)|=\sup_{v\ne 0}\frac{\|Cv\|_2}{\|v\|_2}=\sup_{\|v\|_2=1}\|Cv\|_2=\|C\|_2, \tag{13}$$

where $\|C\|_2>0$ for $C\ne 0$.

Similarly, one can derive a chain of relations for $c_0=\inf_{\|v\|_2=1}|(Cv,v)|$, as the next lemma shows.

Lemma 2

(Chain of relations for $c_0=\inf_{\|v\|_2=1}|(Cv,v)|$)

Let $C\in\mathbb{C}^{n\times n}$ with $C^*=C$.

Then,
$$c_0=\inf_{\|v\|_2=1}|(Cv,v)|=\inf_{v\ne 0}\frac{\|Cv\|_2}{\|v\|_2}=\inf_{\|v\|_2=1}\|Cv\|_2. \tag{14}$$

Now, let $C$ additionally be regular, and let $\|\cdot\|$ denote any vector norm as well as the associated subordinate matrix norm.

Then,
$$\inf_{\|v\|=1}\|Cv\|=\frac{1}{\|C^{-1}\|}>0; \tag{15}$$

this is valid, in particular, for the spectral norm $\|\cdot\|_2$.

The proof will be given in Appendix A.2.

5. Two-sided bounds on $P_x(t)-P=\Phi(t)(P_0-P)\Phi^T(t)$

In this section, the results of Section 4 are employed to estimate $\|P_x(t)-P\|_2$ from above and below by $\|\Phi(t)\|_2^2$ as well as by $e^{2(\nu[A]+\varepsilon)t}$ resp. $e^{2\nu[A]t}$, where $\nu[A]$ is the spectral abscissa of matrix $A$. New is the quadratic asymptotic behavior of $\|P_x(t)-P\|_2$.

We show the following theorem.

Theorem 3

(Two-sided bounds on $P_x(t)-P=\Phi(t)(P_0-P)\Phi^T(t)$ based on $\|\Phi(t)\|_2^2$)

Let $A\in\mathbb{R}^{n\times n}$, and let $\Phi(t)=e^{At}$ be the associated fundamental matrix with $\Phi(0)=E$, where $E$ is the identity matrix. Further, let $P_0,P\in\mathbb{R}^{n\times n}$ be the covariance matrices from Section 2.

Then,
$$q_0\|\Phi(t)\|_2^2\le\|P_x(t)-P\|_2\le q_1\|\Phi(t)\|_2^2,\quad t\ge 0, \tag{16}$$

where
$$q_0=\inf_{\|v\|_2=1}|((P_0-P)v,v)| \tag{17}$$

and
$$q_1=\sup_{\|v\|_2=1}|((P_0-P)v,v)|=\|P_0-P\|_2. \tag{18}$$

If $P_0\ne P$, then $q_1>0$. If $P_0-P$ is regular, then
$$q_0=\|(P_0-P)^{-1}\|_2^{-1}>0. \tag{19}$$

Proof

We obtain Theorem 3 by applying Lemmas 1 and 2 with $\Psi=\Psi(t)=\Phi^T(t)=e^{A^Tt}$, $\Psi^*(t)=\Phi(t)=e^{At}$, and $C=P_0-P$; note that $\|\Phi^T(t)\|_2=\|\Phi(t)\|_2$.

Further, two-sided bounds can be derived by using Kohaupt (Citation2006, Theorem 8). Thus, there exist a constant $\varphi_0>0$ and for every $\varepsilon>0$ a constant $\varphi_{1,\varepsilon}>0$ such that
$$\varphi_0\,e^{\nu[A]t}\le\|\Phi(t)\|_2\le\varphi_{1,\varepsilon}\,e^{(\nu[A]+\varepsilon)t},\quad t\ge 0. \tag{20}$$

This leads to the following corollary.

Corollary 4

(Two-sided bounds on $P_x(t)-P=\Phi(t)(P_0-P)\Phi^T(t)$ based on $\nu[A]$)

Let $A\in\mathbb{R}^{n\times n}$, and let $\Phi(t)=e^{At}$ be the associated fundamental matrix with $\Phi(0)=E$, where $E$ is the identity matrix. Further, let $P_0,P\in\mathbb{R}^{n\times n}$ be the covariance matrices from Section 2.

Then, there exists a constant $p_0\ge 0$ and for every $\varepsilon>0$ a constant $p_1(\varepsilon)\ge 0$ such that
$$p_0\,e^{2\nu[A]t}\le\|P_x(t)-P\|_2\le p_1(\varepsilon)\,e^{2(\nu[A]+\varepsilon)t},\quad t\ge 0. \tag{21}$$

If $P_0\ne P$, then $p_1(\varepsilon)>0$. If $P_0-P$ is regular, then also $p_0>0$.

Remark

Due to the equivalence of norms in finite-dimensional spaces, corresponding bounds as in Theorem 3 and Corollary 4 are valid also in all other (not necessarily multiplicative) matrix norms. Of course, besides the spectral norm $\|\cdot\|_2$, also the Frobenius norm $\|\cdot\|_F=|\cdot|_2$ is of special interest in the context of stochastically excited systems.

6. Local regularity of the function $\|P_x(t)-P\|_2$

We have the following lemma, which states, loosely speaking, that for every $t_0\ge 0$, the function $t\mapsto\|\Delta P_x(t)\|_2:=\|P_x(t)-P\|_2=\|\Phi(t)(P_0-P)\Phi^T(t)\|_2$ is real analytic in some right neighborhood $[t_0,t_0+\Delta t_0]$.

Lemma 5

(Real analyticity of $t\mapsto\|P_x(t)-P\|_2$ on $[t_0,t_0+\Delta t_0]$)

Let $t_0\in\mathbb{R}_0^+$. Then, there exist a number $\Delta t_0>0$ and a function $t\mapsto\widehat{\Delta P_x}(t)$, which is real analytic on $[t_0,t_0+\Delta t_0]$, such that $\widehat{\Delta P_x}(t)=\|\Delta P_x(t)\|_2=\|P_x(t)-P\|_2=\|\Phi(t)(P_0-P)\Phi^T(t)\|_2$, $t\in[t_0,t_0+\Delta t_0]$.

Proof

Based on $\|\Delta P_x(t)\|_2=\max\{|\lambda_{\max}(\Delta P_x(t))|,|\lambda_{\min}(\Delta P_x(t))|\}$, the proof is similar to that of Kohaupt (Citation2001, Lemma 1). The details are left to the reader.

7. Formulas for the norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$

In this section, in a first step, for complex matrices $A$ and $C$ with $C^*=C$, we define a matrix function $\Psi(t):=\Phi(t)C\Phi^*(t)$ and derive formulas for the right norm derivatives $D_+^k\|\Psi(t)\|_2$ based on the representation $\|\Psi(t)\|_2=\sup_{\|u\|_2=1}|(\Psi(t)u,u)|$ instead of $\|\Psi(t)\|_2=\sup_{\|u\|_2=1}\|\Psi(t)u\|_2$. Even though the latter is also valid for $C^*\ne C$, the former leads to simpler formulas. In a second step, the obtained formulas are employed for $C=P_0-P\in\mathbb{R}^{n\times n}$ and $A\in\mathbb{R}^{n\times n}$ to deliver the formulas for $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$.

Let $C\in\mathbb{C}^{n\times n}$ with $C^*=C$. Then, the eigenvalues $\lambda_j(C),\ j=1,\dots,n$ of $C$ are real, and for the spectral norm of $C$, one has the formula
$$\|C\|_2=\max_{\|u\|_2=1}|(Cu,u)|=\max_{j=1,\dots,n}|\lambda_j(C)|$$

and thus
$$\|C\|_2=\max\{|\lambda_{\max}(C)|,|\lambda_{\min}(C)|\}.$$

Now, let $A\in\mathbb{C}^{n\times n}$, let $\Phi(t)=e^{At}$ be its fundamental matrix, and define
$$\Psi(t):=\Phi(t)\,C\,\Phi^*(t),\quad t\ge 0.$$

Then, $\Psi(t)\in\mathbb{C}^{n\times n}$ with $\Psi^*(t)=\Psi(t)$ and thus
$$\|\Psi(t)\|_2=\max\{|\lambda_{\max}(\Psi(t))|,|\lambda_{\min}(\Psi(t))|\},\quad t\ge 0;$$

cf. (Achieser & Glasman, Citation1968, Chapter II.2, p. 62) or (Kantorowitsch & Akilow, Citation1964, p. 255).

We mention that without $C^*=C$, one would have the formula
$$\|\Psi(t)\|_2=\lambda_{\max}^{1/2}\big(\Psi^*(t)\Psi(t)\big),\quad t\ge 0.$$

Of course, this formula remains valid for $C^*=C$, but it is more complicated and probably numerically less favorable than the first representation of $\|\Psi(t)\|_2$. The computation of $D_+^k\|\Psi(t)\|_2$ by the last formula would be similar to that in Kohaupt (Citation2001) for $D_+^k\|\Phi(t)\|_2$.
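As a small numerical cross-check of the two representations (an illustration with random data, not taken from the paper):

```python
# Sketch: ||Psi(t)||_2 via max{|lambda_max|, |lambda_min|} versus
# lambda_max^{1/2}(Psi* Psi); both coincide since Psi*(t) = Psi(t).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4)); C = 0.5 * (C + C.T)   # C* = C (real symmetric)

t = 0.7
Phi = expm(A * t)
Psi = Phi @ C @ Phi.T                  # Psi(t) = Phi(t) C Phi*(t), real case

lam = np.linalg.eigvalsh(Psi)          # real eigenvalues of the Hermitian Psi
n1 = max(abs(lam[0]), abs(lam[-1]))    # max{|lambda_min|, |lambda_max|}
n2 = np.sqrt(np.linalg.eigvalsh(Psi.T @ Psi)[-1])   # lambda_max^{1/2}(Psi* Psi)
print(n1, n2)                          # identical up to rounding
```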

In order to get a formula for $D_+^k\|\Psi(t)\|_2$ in terms of the given matrices $A$ and $C$, we follow, at the beginning, a line similar to that in Kohaupt (Citation2001, Section 3, pp. 6–7).

The starting point is the series expansion
$$\Psi(t):=\Phi(t)\,C\,\Phi^*(t)=\sum_{j=0}^\infty\Phi(t_0)\,B_j\,\Phi^*(t_0)\,\frac{(t-t_0)^j}{j!}$$

with
$$B_j=\sum_{k=0}^j\binom{j}{k}A^{j-k}C\,(A^*)^k,\quad j=0,1,2,\dots$$
Thus, e.g.
$$B_0=C,\qquad B_1=AC+CA^*,\qquad B_2=A^2C+2ACA^*+C(A^*)^2.$$

Consequently,
$$\Psi(t)=T^{(0)}+T^{(1)}(t-t_0)+T^{(2)}(t-t_0)^2+\cdots$$

with
$$T^{(0)}=\Phi(t_0)B_0\Phi^*(t_0),\qquad T^{(1)}=\Phi(t_0)B_1\Phi^*(t_0),\qquad T^{(2)}=\Phi(t_0)\left(\tfrac12 B_2\right)\Phi^*(t_0).$$
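A quick plausibility check of these coefficients (illustrative random data; not part of the original paper): since $A$ commutes with $\Phi(t_0)$, $T^{(1)}$ must equal $\Psi'(t_0)$, which can be verified by a difference quotient.

```python
# Sketch: verify T^(1) = Phi(t0) B_1 Phi*(t0) against a central difference
# quotient of Psi(t) = Phi(t) C Phi*(t) (real case, so Phi* = Phi^T).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3)); C = 0.5 * (C + C.T)

def Psi(t):
    Phi = expm(A * t)
    return Phi @ C @ Phi.T

t0, h = 0.5, 1e-6
Phi0 = expm(A * t0)
B1 = A @ C + C @ A.T                   # B_1 = A C + C A*
T1 = Phi0 @ B1 @ Phi0.T                # T^(1)

dPsi = (Psi(t0 + h) - Psi(t0 - h)) / (2 * h)
print(np.linalg.norm(T1 - dPsi))       # tiny residual, confirming the expansion
```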

Then, due to Kato (Citation1966, Theorem 5.11, Chapter II, pp. 115–116) and Kohaupt (Citation1999, Lemma 2.1),
$$\lambda_{\max}(\Psi(t))=\nu_{0,\max}+\nu_{1,\max}(t-t_0)+\nu_{2,\max}(t-t_0)^2+\cdots,\quad t_0\le t\le t_0+\Delta t_0,$$

where the quantities $\nu_{j,\max},\ j=0,1,2$ are given by the formulas for $\nu_j,\ j=0,1,2$ in Kohaupt (Citation2001, pp. 6–7) with the operators $T^{(0)},T^{(1)},T^{(2)}$ defined above. This is briefly recapitulated in Appendix A.3.

The series expansion
$$\lambda_{\min}(\Psi(t))=\nu_{0,\min}+\nu_{1,\min}(t-t_0)+\nu_{2,\min}(t-t_0)^2+\cdots,\quad t_0\le t\le t_0+\Delta t_0,$$

is obtained via the formula
$$\lambda_{\min}(\Psi(t))=-\lambda_{\max}(-\Psi(t)).$$

Now, define
$$s(t):=\begin{bmatrix}s_1(t)\\ s_2(t)\end{bmatrix}:=\begin{bmatrix}\lambda_{\max}(\Psi(t))\\ \lambda_{\min}(\Psi(t))\end{bmatrix}.$$

Then,
$$s(t)=s(t_0)+Ds(t_0)(t-t_0)+D^2s(t_0)\frac{(t-t_0)^2}{2}+\cdots,\quad t_0\le t\le t_0+\Delta t_0,$$

with
$$s(t_0)=\begin{bmatrix}\nu_{0,\max}\\ \nu_{0,\min}\end{bmatrix},\qquad Ds(t_0)=\begin{bmatrix}\nu_{1,\max}\\ \nu_{1,\min}\end{bmatrix},\qquad D^2s(t_0)=\begin{bmatrix}2\nu_{2,\max}\\ 2\nu_{2,\min}\end{bmatrix},$$

and
$$\|\Psi(t_0)\|_2=\|s(t_0)\|_\infty.$$

Hence, for fixed $t_0\in\mathbb{R}_0^+$,
$$D_+^k\|\Psi(t_0)\|_2=D_+^k\|s(t_0)\|_\infty,\quad k=0,1,2,$$

where the norm derivatives $D_+^k\|s(t_0)\|_\infty,\ k=0,1,2$ are obtained by the formulas of Kohaupt (Citation2002, Theorem 6). This is briefly recapitulated in Appendix A.4.

If one replaces $t_0$ by $t$, then one gets the functions $t\mapsto D_+^k\|\Psi(t)\|_2=D_+^k\|s(t)\|_\infty,\ k=0,1,2$.

The norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$ are obtained as the special case $C=P_0-P\in\mathbb{R}^{n\times n}$ and $A\in\mathbb{R}^{n\times n}$.

These formulas are needed in Section 8.

8. Applications

In this section, we apply the new two-sided bounds on $\|P_x(t)-P\|_2$ obtained in Section 5 as well as the differential calculus of norms developed in Sections 6 and 7 to a linear stochastic vibration model with asymptotically stable system matrix and white noise excitation vector.

In Section 8.1, the stochastic vibration model as well as its state-space form is given, and in Section 8.2, the data are chosen. In Section 8.3, computations with the specified data are carried out, such as the computation of $P$ and $P_0-P$ as well as the computation of the curves $y=D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$, and of the curve $y=\|P_x(t)-P\|_2$ along with its best upper and lower bounds. In Section 8.4, computational aspects are briefly discussed.

8.1. The stochastic vibration model and its state-space form

Consider the multi-mass vibration model in Figure 1.

Figure 1. Multi-mass vibration model.

The associated initial-value problem is given by
$$M\ddot{y}+B\dot{y}+Ky=f(t),\qquad y(0)=y_0,\quad\dot{y}(0)=\dot{y}_0,$$

where $y=[y_1,\dots,y_n]^T$ and $f(t)=[f_1(t),\dots,f_n(t)]^T$ as well as
$$M=\operatorname{diag}(m_1,m_2,m_3,\dots,m_n),$$
$$B=\begin{bmatrix}b_1+b_2&-b_2&&&&\\-b_2&b_2+b_3&-b_3&&&\\&-b_3&b_3+b_4&-b_4&&\\&&\ddots&\ddots&\ddots&\\&&&-b_{n-1}&b_{n-1}+b_n&-b_n\\&&&&-b_n&b_n+b_{n+1}\end{bmatrix},$$
$$K=\begin{bmatrix}k_1+k_2&-k_2&&&&\\-k_2&k_2+k_3&-k_3&&&\\&-k_3&k_3+k_4&-k_4&&\\&&\ddots&\ddots&\ddots&\\&&&-k_{n-1}&k_{n-1}+k_n&-k_n\\&&&&-k_n&k_n+k_{n+1}\end{bmatrix}.$$

Here, $y$ is the displacement vector, $f(t)$ the applied force, and $M$, $B$, and $K$ are the mass, damping, and stiffness matrices, respectively. Matrix $M$ is regular.

In the state-space description, one obtains
$$\dot{x}(t)=Ax(t)+b(t),\qquad x(0)=x_0,$$

with $x=[y^T,z^T]^T$, $z=\dot{y}$, and $x_0=[y_0^T,z_0^T]^T$, $z_0=\dot{y}_0$,

where the initial vector $x_0=[y_0^T,z_0^T]^T$ is characterized by the mean vector $m_0$ and the covariance matrix $P_0$.

The system matrix $A$ and the excitation vector $b(t)$ are given by
$$A=\begin{bmatrix}0&E\\-M^{-1}K&-M^{-1}B\end{bmatrix},\qquad b(t)=\begin{bmatrix}0\\M^{-1}f(t)\end{bmatrix},$$
respectively. The vector $x(t)$ is called the state vector.

The (symmetric positive semi-definite) intensity matrix $Q=Q_b$ is obtained from the (symmetric positive semi-definite) intensity matrix $Q_f$ by
$$Q=Q_b=\begin{bmatrix}0&0\\0&M^{-1}Q_fM^{-1}\end{bmatrix}$$
(see Müller, Citation1976, Formula (9.65)); the derivation of this relation is given in Appendix A.5.

8.2. Data for the model

From now on, we specify the values as
$$m_j=1,\quad j=1,\dots,n;\qquad k_j=1,\quad j=1,\dots,n+1;\qquad b_j=\begin{cases}1/2,&j\ \text{even}\\ 1/4,&j\ \text{odd}\end{cases}$$

Then, $M=E$,
$$B=\begin{bmatrix}\tfrac34&-\tfrac12&&&&\\-\tfrac12&\tfrac34&-\tfrac14&&&\\&-\tfrac14&\tfrac34&-\tfrac12&&\\&&\ddots&\ddots&\ddots&\\&&&-\tfrac14&\tfrac34&-\tfrac12\\&&&&-\tfrac12&\tfrac34\end{bmatrix}\quad(\text{if }n\text{ is even}),\qquad K=\begin{bmatrix}2&-1&&&\\-1&2&-1&&\\&-1&2&-1&\\&&\ddots&\ddots&\ddots\\&&&-1&2\end{bmatrix}.$$

We choose n=5 in this paper so that the state-space vector x(t) has the dimension m=2n=10.
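The paper's computations were done in MATLAB; a Python transcription of the model assembly under the above data is sketched below for illustration:

```python
# Sketch: assemble M, B, K for n = 5 and the state-space matrix A.
import numpy as np

n = 5
m = np.ones(n)                                    # m_j = 1
k = np.ones(n + 1)                                # k_j = 1
b = np.array([0.5 if j % 2 == 0 else 0.25         # b_j = 1/2 (j even), 1/4 (j odd)
              for j in range(1, n + 2)])

M = np.diag(m)
B = np.diag(b[:-1] + b[1:]) - np.diag(b[1:-1], 1) - np.diag(b[1:-1], -1)
K = np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1)

Minv = np.linalg.inv(M)                           # M = E here, so Minv = E
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ B]])            # 10 x 10 system matrix
```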

Remark

In Sections 2–7, we have denoted the dimension of $x(t)$ by $n$. From the context, the actual dimension should be clear.

For $m_0$, we take
$$m_0=[m_{y_0}^T,m_{z_0}^T]^T$$

with
$$m_{y_0}=[-1,1,-1,1,-1]^T$$

and
$$m_{z_0}=\begin{cases}[0,0,0,0,0]^T&(\text{Case I})\\ [-1,-1,-1,-1,-1]^T&(\text{Case II})\end{cases}$$

similarly as in Kohaupt (Citation2002) for $y_0$ and $\dot{y}_0$. For the $10\times 10$ matrix $P_0$, we choose
$$P_0=0.01\,E.$$

The white-noise force vector $f(t)$ is specified as
$$f(t)=[0,\dots,0,f_n(t)]^T$$

so that its intensity matrix $Q_f\in\mathbb{R}^{n\times n}$ with $q_{f,nn}=:q$ has the form
$$Q_f=\operatorname{diag}(0,\dots,0,q).$$
We choose
$$q=0.01.$$

With $M=E$, this leads to (see Appendix A.5)
$$Q=Q_b=\begin{bmatrix}0&0\\0&Q_f\end{bmatrix}.$$
In the Lyapunov equation $BX+XA=C$ of Section 2, we employ the replacements
$$B\to A,\qquad A\to A^T,\qquad C\to -Q$$

to obtain the limiting covariance matrix $X=P=\lim_{t\to\infty}P_x(t)$.
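Continuing the sketch after Section 8.2 (it reuses n and A from there), the intensity matrix Q and the limiting covariance matrix P can be computed; SciPy's Lyapunov solver stands in for Ma's method, and if the reconstruction of A and Q matches the paper's data, the eigenvalues of P should reproduce the values given in Section 8.3:

```python
# Sketch (continues the previous one; reuses n and A): build Q and solve
# A P + P A^T + Q = 0 for the limiting covariance matrix P.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

q = 0.01
Qf = np.zeros((n, n)); Qf[-1, -1] = q            # Q_f = diag(0, ..., 0, q)
Q = np.block([[np.zeros((n, n)), np.zeros((n, n))],
              [np.zeros((n, n)), Qf]])           # M = E, so Q = blockdiag(0, Q_f)

P = solve_continuous_lyapunov(A, -Q)             # solves A X + X A^T = -Q
print(np.sort(np.linalg.eigvalsh(P))[::-1])      # cf. Lambda_P in Section 8.3
```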

8.3. Computations with the specified data

(i) Bounds on $y=\|\Phi(t)m_0\|_2$ in the vector norm $\|\cdot\|_2$

Upper bounds on $y=\|\Phi(t)m_0\|_2$ in the vector norm $\|\cdot\|_2$ for the two cases (I) and (II) of $m_0$ are already given in Kohaupt (Citation2002, Figures 2 and 3). There, we had a deterministic problem with $f(t)=0$ and the solution vector $x(t)=\Phi(t)x_0$, where $x_0$ there had the same data as $m_0$ here. We mention that, for the specified data, $\nu_{m_0}[A]=\nu[A]$ in both cases; see Kohaupt (Citation2006, p. 154) for a method to prove this. For the sake of brevity, we do not compute or plot the lower bounds and thus the two-sided bounds, but leave this to the reader.

(ii) Computation of $P$ and $P_0-P$ as well as of their associated eigenproblems

With the data of Section 8.2, we obtain

P =
   0.1624   0.2497   0.2347   0.1645   0.0809  -0.0000  -0.0084  -0.0249  -0.0312  -0.0058
   0.2497   0.4034   0.4129   0.3134   0.1525   0.0084  -0.0000  -0.0421  -0.0701  -0.0260
   0.2347   0.4129   0.4977   0.4453   0.2305   0.0249   0.0421   0.0000  -0.0706  -0.0821
   0.1645   0.3134   0.4453   0.4826   0.3205   0.0312   0.0701   0.0706  -0.0000  -0.1426
   0.0809   0.1525   0.2305   0.3205   0.4249   0.0058   0.0260   0.0821   0.1426  -0.0000
  -0.0000   0.0084   0.0249   0.0312   0.0058   0.0793   0.1023   0.0541   0.0040   0.0007
  -0.0084   0.0000   0.0421   0.0701   0.0260   0.1023   0.1506   0.1124   0.0363  -0.0103
  -0.0249  -0.0421  -0.0000   0.0706   0.0821   0.0541   0.1124   0.1621   0.1300  -0.0282
  -0.0312  -0.0701  -0.0706  -0.0000   0.1426   0.0040   0.0363   0.1300   0.1998   0.0514
  -0.0058  -0.0260  -0.0821  -0.1426   0.0000   0.0007  -0.0103  -0.0282   0.0514   0.4937

The column vector of eigenvalues $\Lambda_P$ and the modal matrix $X_P$, that is, the matrix whose columns are made up of the eigenvectors, are computed as

Lambda_P =
   1.54687260865985
   0.55390744777884
   0.50669339270302
   0.26282228387153
   0.12187876942246
   0.05658235034695
   0.00625573952218
   0.00160497361429
   0.00001727972012
   0.00002903815291

X_P =
  -0.2584  -0.1685  -0.1691  -0.0346  -0.4563   0.1911  -0.4245   0.1102   0.6068   0.2642
  -0.4509  -0.2648  -0.2304  -0.0776  -0.3978   0.2037   0.0009   0.0373  -0.6809  -0.0524
  -0.5517  -0.1522  -0.0849  -0.1348   0.0842  -0.2420   0.4392  -0.4102   0.3492  -0.3135
  -0.5321   0.1348   0.1400  -0.0065   0.3553  -0.3934  -0.1093   0.5373  -0.0662   0.3050
  -0.3465   0.5405  -0.0388   0.4707   0.2111   0.4732  -0.1727  -0.2410  -0.0095  -0.0904
  -0.0295   0.1043   0.0511  -0.4547   0.1232   0.4945   0.1521   0.5148   0.1529  -0.4557
  -0.0537   0.2292   0.1198  -0.6320   0.1107   0.2326   0.0712  -0.3678  -0.0806   0.5636
  -0.0354   0.4144   0.1863  -0.3279  -0.2773  -0.3810  -0.5002  -0.1605  -0.0923  -0.4223
   0.0217   0.5263   0.0116   0.1227  -0.5647  -0.1168   0.5517   0.2068   0.0581   0.1558
   0.1301   0.2364  -0.9155  -0.1559   0.1798  -0.1653  -0.0489   0.0513  -0.0029   0.0083

showing that $P$ is positive definite. Correspondingly,

Lambda_{P0-P} =
  -0.54687260865985
   0.44609255222115
   0.49330660729698
   0.73717771612847
   0.87812123057754
   0.94341764965305
   0.99374426047782
   0.99839502638571
   0.99998272027988
   0.99997096184709

X_{P0-P} =
   0.2584   0.1685   0.1691   0.0346   0.4563  -0.1911   0.4245   0.1102  -0.6068   0.2642
   0.4509   0.2648   0.2304   0.0776   0.3978  -0.2037  -0.0009   0.0373   0.6809  -0.0524
   0.5517   0.1522   0.0849   0.1348  -0.0842   0.2420  -0.4392  -0.4102  -0.3492  -0.3135
   0.5321  -0.1348  -0.1400   0.0065  -0.3553   0.3934   0.1093   0.5373   0.0662   0.3050
   0.3465  -0.5405   0.0388  -0.4707  -0.2111  -0.4732   0.1727  -0.2410   0.0095  -0.0904
   0.0295  -0.1043  -0.0511   0.4547  -0.1232  -0.4945  -0.1521   0.5148  -0.1529  -0.4557
   0.0537  -0.2292  -0.1198   0.6320  -0.1107  -0.2326  -0.0712  -0.3678   0.0806   0.5636
   0.0354  -0.4144  -0.1863   0.3279   0.2773   0.3810   0.5002  -0.1605   0.0923  -0.4223
  -0.0217  -0.5263  -0.0116  -0.1227   0.5647   0.1168  -0.5517   0.2068  -0.0581   0.1558
  -0.1301  -0.2364   0.9155   0.1559  -0.1798   0.1653   0.0489   0.0513   0.0029   0.0083

showing that $P_0-P$ is symmetric and regular (but not positive definite). Matrix $P_0-P$ is needed to compute the curve $y=\|P_x(t)-P\|_2=\|\Phi(t)(P_0-P)\Phi^T(t)\|_2$.

(iii) Computation of the curves $y=D_+^k\|P_x(t)-P\|_2=D_+^k\|\Phi(t)(P_0-P)\Phi^T(t)\|_2,\ k=0,1,2$

The computation of $y=D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$ for the given data is done according to Section 7 with $C=P_0-P$. The pertinent curves are illustrated in Figures 2–4. By inspection, there are no kinks (like in the curve $y=|t|$ at $t=0$), so that $D_+^k\|P_x(t)-P\|_2=D^k\|P_x(t)-P\|_2=\frac{d^k}{dt^k}\|P_x(t)-P\|_2,\ k=0,1,2$. For some details on the computation of $D_+^k\|P_x(t)-P\|_2,\ k=1,2$, see Appendix A.6.

We have checked the results numerically by difference quotients. More precisely, setting
$$\Delta P_x(t):=P_x(t)-P,\quad t>0,$$

and
$$g(t):=\|\Delta P_x(t)\|_2=\|P_x(t)-P\|_2,\quad t>0,$$

we have investigated the approximations
$$\delta_h g(t):=\frac{g(t+h)-g(t-h)}{2h}\approx Dg(t),\quad t-h\ge 0,$$

and
$$\delta_{h/2}^2 g(t):=\delta_{h/2}\big(\delta_{h/2}g(t)\big)=\frac{g(t+h)-2g(t)+g(t-h)}{h^2}\approx D^2g(t),\quad t-h\ge 0,$$

as well as
$$\delta_h Dg(t):=\frac{Dg(t+h)-Dg(t-h)}{2h}\approx D^2g(t),\quad t-h\ge 0.$$
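These checks are easy to reproduce; a sketch continuing the ones above (it reuses n, A, and P from the previous sketches):

```python
# Sketch (reuses n, A, P): central difference quotients for
# g(t) = ||P_x(t) - P||_2 = ||Phi(t)(P0 - P)Phi^T(t)||_2.
import numpy as np
from scipy.linalg import expm

P0 = 0.01 * np.eye(2 * n)

def g(t):
    Phi = expm(A * t)
    return np.linalg.norm(Phi @ (P0 - P) @ Phi.T, 2)   # spectral norm

t, h = 2.5, 1e-5
delta_h_g = (g(t + h) - g(t - h)) / (2 * h)            # ~ Dg(t)
delta2_g  = (g(t + h) - 2 * g(t) + g(t - h)) / h**2    # ~ D^2 g(t)
print(delta_h_g, delta2_g)
```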

For
$$t=2.5,\qquad h=10^{-5},$$

we obtain
$$Dg(t)=D\|P_x(t)-P\|_2=0.00304667381090,\qquad \delta_h g(t)=\delta_h\|P_x(t)-P\|_2=0.00304667381375,$$

as well as
$$D^2g(t)=D^2\|P_x(t)-P\|_2=-0.00667087489483,\qquad \delta_{h/2}^2 g(t)=\delta_{h/2}^2\|P_x(t)-P\|_2=-0.00667393224019,$$

and
$$D^2g(t)=D^2\|P_x(t)-P\|_2=-0.00667087489483,\qquad \delta_h Dg(t)=\delta_h D\|P_x(t)-P\|_2=-0.00667087489555,$$

so that the computational results for $y=D_+^k\|P_x(t)-P\|_2=D^k\|P_x(t)-P\|_2=\frac{d^k}{dt^k}\|P_x(t)-P\|_2,\ k=0,1,2$ with $t=2.5$ are well underpinned by the difference quotients. As we see, the approximation of $D^2g(t)=D^2\|P_x(t)-P\|_2$ by $\delta_h Dg(t)$ is much better than by $\delta_{h/2}^2 g(t)$, which was to be expected, of course.

Figure 2. Curve $y=\|P_x(t)-P\|_2,\ 0\le t\le 25,\ \Delta t=0.1$.

Figure 3. Right norm derivative $y=D_+\|P_x(t)-P\|_2,\ 0\le t\le 25,\ \Delta t=0.1$.

Figure 4. Second right norm derivative $y=D_+^2\|P_x(t)-P\|_2,\ 0\le t\le 25,\ \Delta t=0.1$.

(iv) Bounds on $y=\|P_x(t)-P\|_2=\|\Phi(t)(P_0-P)\Phi^T(t)\|_2$ in the spectral norm $\|\cdot\|_2$

Let $\alpha:=\nu[A]$ be the spectral abscissa of the system matrix $A$. With the given data, we obtain
$$\alpha:=\nu[A]=-0.05023936121946,$$

so that the system matrix $A$ is asymptotically stable.

The upper bound on $y=\|P_x(t)-P\|_2=\|\Phi(t)(P_0-P)\Phi^T(t)\|_2$ is given by $y=p_1(\varepsilon)\,e^{2(\alpha+\varepsilon)t},\ t\ge 0$. Here, $\varepsilon=0$ can be chosen since matrix $A$ is diagonalizable. But, in the programs, we have chosen the machine precision $\varepsilon=\mathrm{eps}=2^{-52}\approx 2.2204\times 10^{-16}$ of MATLAB in order not to be bothered by this question.

With $\varphi_{1,\varepsilon}(t):=p_1(\varepsilon)\,e^{2(\alpha+\varepsilon)t},\ t\ge 0$, the optimal constant $p_1(\varepsilon)$ in the upper bound is obtained from the two conditions
$$\|P_x(t_c)-P\|_2=\varphi_{1,\varepsilon}(t_c)=p_1(\varepsilon)\,e^{2(\alpha+\varepsilon)t_c},$$
$$D_+\|P_x(t_c)-P\|_2=\varphi_{1,\varepsilon}'(t_c)=2(\alpha+\varepsilon)\,\varphi_{1,\varepsilon}(t_c),$$

where $t_c$ is the place of contact between the curves.

This is a system of two non-linear equations in the two unknowns $t_c$ and $p_1(\varepsilon)$. By eliminating $\varphi_{1,\varepsilon}(t_c)$, this system is reduced to the determination of the zero of
$$D_+\|P_x(t_c)-P\|_2-2(\alpha+\varepsilon)\,\|P_x(t_c)-P\|_2=0,$$

which is a single non-linear equation in the single unknown $t_c$. For this, the MATLAB routine fsolve was used.

After $t_c$ has been computed from the above equation, the best constant $p_1(\varepsilon)$ is obtained from
$$p_1(\varepsilon)=\|P_x(t_c)-P\|_2\,e^{-2(\alpha+\varepsilon)t_c}.$$
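The paper uses MATLAB's fsolve together with the analytic norm derivative from Section 7. In the hedged Python sketch below (reusing g and A from the previous sketches), the analytic derivative is replaced by a central difference quotient:

```python
# Sketch (reuses g and A): compute the contact point t_c and the best
# constant p_1(eps); the analytic D+ norm derivative is replaced by a
# central difference quotient.
import numpy as np
from scipy.optimize import fsolve

alpha = max(ev.real for ev in np.linalg.eigvals(A))    # nu[A]
eps = np.finfo(float).eps                              # machine precision

def Dg(t, h=1e-6):
    return (g(t + h) - g(t - h)) / (2 * h)

# single non-linear equation in t_c, initial guess 3.0 as in the paper
tc = fsolve(lambda t: Dg(t[0]) - 2 * (alpha + eps) * g(t[0]), x0=[3.0])[0]
p1 = g(tc) * np.exp(-2 * (alpha + eps) * tc)
print(tc, p1)
```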

From the initial guess $t_{c,0}=3.0$, the computations deliver the values
$$t_c=3.14231573176783,\qquad p_1(\varepsilon)=0.02288922631729.$$

In a similar way, in the lower bound $y=p_0\,e^{2\alpha t}$, we compute the best constant $p_0$ and the place of contact $t_s$. For the initial guess $t_{s,0}=6.0$, the results are
$$t_s=6.20977038583445,\qquad p_0=0.00600530767800.$$

The curve $y=\|P_x(t)-P\|_2$ along with the best upper and lower bounds is illustrated in Figure 5.

Figure 5. $y=\|P_x(t)-P\|_2$ along with the best upper and lower bounds.

(v) Applicability of the second norm derivative $D_+^2\|P_x(t)-P\|_2$

The first norm derivative $D_+\|P_x(t)-P\|_2$ was employed in Point (iv). Apart from this, it can be applied to determine the relative extrema of the curve $y=\|P_x(t)-P\|_2$.

The second norm derivative $D_+^2\|P_x(t)-P\|_2$ can be used to compute the inflexion points. The details are left to the reader.

8.4. Computational aspects

In this subsection, we comment on the computer equipment and on the computation time for some operations.

(i) As to the computer equipment, the following hardware was available: an Intel Pentium D (3.20 GHz, 800 MHz front-side bus, 2×2 MB DDR2-SDRAM with 533 MHz high-speed memory). As software package, we used MATLAB, Version 6.5.

(ii) The computation time $t$ of an operation was determined by the command sequence ti=clock; operation; t=etime(clock,ti); it is put out in seconds, rounded to two decimal places, by MATLAB. For the computation of the eigenvalues of matrix $A$, we used the command [XA,DA]=eig(A); the pertinent computation time is less than 0.01 s. To determine $\Phi(t)=e^{At}$, we employed the MATLAB routine expm. For the computation of the 251 values $t,y,y_u,y_l$ in Figure 5, it took $t(\text{Table for Figure 5})=0.83$ s. Here, $t$ stands for the time value running from $t_0=0$ to $t_e=25$ with stepsize $\Delta t=0.1$; $y$ stands for the value of $\|P_x(t)-P\|_2$, $y_u$ for the value of the best upper bound $p_1(\varepsilon)\,e^{2(\alpha+\varepsilon)t}$, and $y_l$ for the value of the best lower bound $p_0\,e^{2\alpha t}$.

9. Conclusion

In the present paper, linear stochastic vibration systems of the form $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$, driven by white noise $b(t)$, are investigated. If the system matrix $A$ is asymptotically stable, then the mean vector $m_x(t)$ and the covariance matrix $P_x(t)$ both converge, with $m_x(t)\to 0\ (t\to\infty)$ and $P_x(t)\to P\ (t\to\infty)$ for some symmetric positive (semi-)definite matrix $P$. This raises the question of the asymptotic behavior of both quantities. The pertinent investigations are made in the Euclidean norm $\|\cdot\|_2$ for $m_x(t)$ and in the spectral norm, also denoted by $\|\cdot\|_2$, for $P_x(t)-P$. The main new points are the derivation of two-sided bounds on both quantities, the derivation of the right norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$, and, as application, the computation of the best constants in the bounds. Since we have used a new way to determine the norm derivatives $D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$, we have checked the obtained formulas by various difference quotients. They underpin the correctness of the numerical values for the specified data.

We remind the reader that the original system consists of a multi-mass vibration model with damping and white noise force excitation. By a standard method, it is cast into state-space form.

As illustration of the results, the curves $y=D_+^k\|P_x(t)-P\|_2,\ k=0,1,2$ are plotted, as well as the curve $y=\|P_x(t)-P\|_2$ together with the best two-sided bounds.

The computation time to generate the last figure with a 10×10 matrix A is less than a second. Of course, in engineering practice, much larger models occur. As in earlier papers, we mention that in this case engineers usually employ a method called condensation to reduce the size of the matrices.

We have added an Appendix to exhibit more details on some items in order to make the paper easier to comprehend.

The numerical values were given in order that the reader can check the results.

Altogether, the results of the paper should be of interest to applied mathematicians and particularly to engineers.

Additional information

Funding

The authors received no direct funding for this research.

Notes on contributors

Ludwig Kohaupt

Ludwig Kohaupt received the equivalent to the Master Degree (Diplom-Mathematiker) in Mathematics in 1971 and the equivalent to the PhD (Dr phil nat) in 1973 from the University of Frankfurt/Main.

From 1974 until 1979, he was a teacher in Mathematics and Physics at a Secondary School. During that time (from 1977 until 1979), he was also an auditor at the Technical University of Darmstadt in Engineering Subjects, such as Mechanics, and especially Dynamics.

From 1979 until 1990, he joined the Mercedes-Benz car company in Stuttgart as a Computational Engineer, where he worked in areas such as Dynamics (vibration of car models), Cam Design, Gearing, and Engine Design. Some of the results were published in scientific journals (on the whole, 12 papers and 1 monograph).

Then, in 1990, he combined his preceding experiences by taking over a professorship at the Beuth University of Technology Berlin. He retired on 01 April 2014.

References

  • Achieser, N. I., & Glasman, I. M. (1968). Theorie der linearen Operatoren im Hilbert-Raum [Theory of linear operators in Hilbert space]. Berlin: Akademie-Verlag.
  • Heuser, H. (1975). Funktionalanalysis [Functional analysis]. Stuttgart: B.G. Teubner.
  • Kantorowitsch, L. W., & Akilow, G. P. (1964). Funktionalanalysis in normierten Räumen [Functional analysis in normed linear spaces]. Berlin: Akademie-Verlag. (German translation of the Russian Original).
  • Kato, T. (1966). Perturbation theory for linear operators. New York: Springer.
  • Kohaupt, L. (1999). Second logarithmic derivative of a complex matrix in the Chebyshev norm. SIAM Journal on Matrix Analysis and Applications, 21, 382–389.
  • Kohaupt, L. (2001). Differential calculus for some p-norms of the fundamental matrix with applications. Journal of Computational and Applied Mathematics, 135, 1–21.
  • Kohaupt, L. (2002). Differential calculus for p-norms of complex-valued vector functions with applications. Journal of Computational and Applied Mathematics, 145, 425–457.
  • Kohaupt, L. (2006). Computation of optimal two-sided bounds for the asymptotic behavior of free linear dynamical systems with application of the differential calculus of norms. Journal of Computational Mathematics and Optimization, 2, 127–173.
  • Ma, E.-Ch. (1966). A finite series solution of the matrix equation AX − XB = C. SIAM Journal on Applied Mathematics, 14, 490–495.
  • Müller, P. C., & Schiehlen, W. O. (1976). Lineare Schwingungen [Linear Vibrations]. Wiesbaden: Akademische Verlagsgesellschaft.
  • Niemeyer, H., & Wermuth, E. (1987). Lineare Algebra [Linear algebra]. Braunschweig, Wiesbaden: Vieweg.
  • Stummel, F., & Hainer, K. (1982). Praktische Mathematik [Introduction to numerical analysis]. Stuttgart: B.G. Teubner.
  • Taylor, A. E. (1958). Introduction to functional analysis. New York, NY: Wiley.

Appendix 1

In this Appendix, we show more details on some items in order to make this paper more easily understandable especially for engineers and generally for a broader readership.

Solution of the Lyapunov matrix equation BX+XA=C by a method of Ma

In this section, we restrict ourselves to the case of diagonalizable matrices B and A since we need only this case in Section 8. The treatment of the general case can be found in Ma (Citation1966).

Let $A\in\mathbb{C}^{n\times n}$, $B\in\mathbb{C}^{m\times m}$, and $C\in\mathbb{C}^{m\times n}$. The problem is to find the solution matrix $X\in\mathbb{C}^{m\times n}$ such that
$$BX+XA=C.$$

We suppose that the matrices $A$ and $B$ are both diagonalizable and that the eigenvalues $\lambda_k(A),\ k=1,\dots,n$ and $\mu_j(B),\ j=1,\dots,m$ satisfy the condition
$$\mu_j(B)+\lambda_k(A)\ne 0;\quad k=1,\dots,n;\ j=1,\dots,m.$$

Then, the solution of the equation $BX+XA=C$ can be obtained as follows.

Since $A$ and $B$ are diagonalizable, there exist regular matrices $U\in\mathbb{C}^{n\times n}$ and $V\in\mathbb{C}^{m\times m}$ such that
$$\tilde{A}=U^{-1}AU,\qquad\tilde{B}=V^{-1}BV,$$

where
$$\tilde{A}=\operatorname{diag}(\lambda_k(A))_{k=1,\dots,n},\qquad\tilde{B}=\operatorname{diag}(\mu_j(B))_{j=1,\dots,m}.$$

Define
$$\tilde{X}=V^{-1}XU,\qquad\tilde{C}=V^{-1}CU.$$

Then, $BX+XA=C$ can be written as
$$(V^{-1}BV)(V^{-1}XU)+(V^{-1}XU)(U^{-1}AU)=V^{-1}CU$$

or
$$\tilde{B}\tilde{X}+\tilde{X}\tilde{A}=\tilde{C}.$$

Its solution $\tilde{X}=(\tilde{X}_{jk})$ is given by
$$\tilde{X}_{jk}=\frac{\tilde{C}_{jk}}{\mu_j+\lambda_k},\quad j=1,\dots,m;\ k=1,\dots,n.$$

From this, we obtain the solution of the original matrix equation $BX+XA=C$ by the relation
$$X=V\tilde{X}U^{-1}.$$
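A compact sketch of this procedure (our own illustrative implementation in Python, with a random self-check; variable names are ours):

```python
# Sketch of Ma's method for B X + X A = C with diagonalizable A and B.
import numpy as np

def ma_solve(B, A, C):
    lam, U = np.linalg.eig(A)                  # A = U diag(lam) U^{-1}
    mu,  V = np.linalg.eig(B)                  # B = V diag(mu)  V^{-1}
    Ct = np.linalg.solve(V, C) @ U             # C~ = V^{-1} C U
    Xt = Ct / (mu[:, None] + lam[None, :])     # X~_jk = C~_jk / (mu_j + lam_k)
    return V @ Xt @ np.linalg.inv(U)           # X = V X~ U^{-1}

# self-check with random data (mu_j + lam_k = 0 is assumed not to occur)
rng = np.random.default_rng(2)
B0 = rng.standard_normal((3, 3))
A0 = rng.standard_normal((4, 4))
C0 = rng.standard_normal((3, 4))
X0 = ma_solve(B0, A0, C0)
print(np.linalg.norm(B0 @ X0 + X0 @ A0 - C0))  # residual ~ 0
```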

Remarks

(i)

The solution $X$ could depend on the transformation matrices $U$ and $V$. But it is actually independent of these matrices since the mapping $L(X):=BX+XA:\ \mathbb{C}^{m\times n}\to\mathbb{C}^{m\times n}$ is injective. Therefore, the solution $X$ is uniquely determined.

(ii)

If $B=A^*$ and $C^*=C$, then $X^*=X$, and $X$ is even positive semi-definite. The proof is left to the reader.

(iii)

If $A\in\mathbb{R}^{n\times n}$ and $B=A^T$ as well as $C\in\mathbb{R}^{n\times n}$ with $C=C^T$, then $X=X^T$, and $X$ is positive semi-definite.

Proof of Lemma 2

In this section, we follow closely the line of Heuser to carry over the proof of $\sup_{\|v\|_2=1}|(Cv,v)|=\sup_{\|v\|_2=1}\|Cv\|_2=\|C\|_2$ to a corresponding relation for $\inf_{\|v\|_2=1}|(Cv,v)|$ for $C\in\mathbb{C}^{n\times n}$ with $C^*=C$ (see Heuser, Citation1975, pp. 277–278 resp. Theorem 6.8.5).

So, let $\|\cdot\|=\|\cdot\|_2$ be the common vector norm resp. the spectral matrix norm.

Let $\lambda>0$ be chosen arbitrarily. Then,
$$4\|Cx\|_2^2=\left(C\Big(\lambda x+\tfrac1\lambda Cx\Big),\lambda x+\tfrac1\lambda Cx\right)-\left(C\Big(\lambda x-\tfrac1\lambda Cx\Big),\lambda x-\tfrac1\lambda Cx\right),\quad x\in\mathbb{C}^n,$$

which is proven by simplifying the right member of the equation. This entails
$$4\|Cx\|^2\le\left|\left(C\Big(\lambda x+\tfrac1\lambda Cx\Big),\lambda x+\tfrac1\lambda Cx\right)\right|+\left|\left(C\Big(\lambda x-\tfrac1\lambda Cx\Big),\lambda x-\tfrac1\lambda Cx\right)\right| =\frac{\left|\left(C\big(\lambda x+\tfrac1\lambda Cx\big),\lambda x+\tfrac1\lambda Cx\right)\right|}{\big\|\lambda x+\tfrac1\lambda Cx\big\|^2}\,\Big\|\lambda x+\tfrac1\lambda Cx\Big\|^2 +\frac{\left|\left(C\big(\lambda x-\tfrac1\lambda Cx\big),\lambda x-\tfrac1\lambda Cx\right)\right|}{\big\|\lambda x-\tfrac1\lambda Cx\big\|^2}\,\Big\|\lambda x-\tfrac1\lambda Cx\Big\|^2$$

if $\lambda x+\tfrac1\lambda Cx\ne 0$ and $\lambda x-\tfrac1\lambda Cx\ne 0$, $\|x\|=1$. Thus,
$$4\inf_{\|x\|=1}\|Cx\|^2\le\frac{|(Cy,y)|}{(y,y)}\,\Big\|\lambda x+\tfrac1\lambda Cx\Big\|^2+\frac{|(Cz,z)|}{(z,z)}\,\Big\|\lambda x-\tfrac1\lambda Cx\Big\|^2$$

with $y=\lambda x+\tfrac1\lambda Cx$ and $z=\lambda x-\tfrac1\lambda Cx$, if $y\ne 0$ and $z\ne 0$.

Using the parallelogram identity, we obtain
$$4\inf_{\|x\|=1}\|Cx\|^2\le\inf_{y\ne 0}\frac{|(Cy,y)|}{(y,y)}\,\Big\|\lambda x+\tfrac1\lambda Cx\Big\|^2+\inf_{z\ne 0}\frac{|(Cz,z)|}{(z,z)}\,\Big\|\lambda x-\tfrac1\lambda Cx\Big\|^2 =\inf_{y\ne 0}\frac{|(Cy,y)|}{(y,y)}\left(\Big\|\lambda x+\tfrac1\lambda Cx\Big\|^2+\Big\|\lambda x-\tfrac1\lambda Cx\Big\|^2\right) =\inf_{y\ne 0}\frac{|(Cy,y)|}{(y,y)}\left(2\|\lambda x\|^2+2\Big\|\tfrac1\lambda Cx\Big\|^2\right) =\inf_{y\ne 0}\frac{|(Cy,y)|}{(y,y)}\left(2\lambda^2\|x\|^2+\frac{2\|Cx\|^2}{\lambda^2}\right).$$

Let $\lambda^2=\frac{\|Cx\|}{\|x\|}=\|Cx\|$ and $\|x\|=1$. Then,
$$4\inf_{\|x\|=1}\|Cx\|^2\le\inf_{y\ne 0}\frac{|(Cy,y)|}{(y,y)}\,\big(2\|Cx\|+2\|Cx\|\big)$$

or
$$\inf_{\|x\|=1}\|Cx\|^2\le\inf_{y\ne 0}\frac{|(Cy,y)|}{(y,y)}\,\inf_{\|x\|=1}\|Cx\|.$$

Now,
$$\inf_{\|x\|=1}\|Cx\|^2=\Big[\inf_{\|x\|=1}\|Cx\|\Big]^2.$$

This leads to
$$\inf_{\|x\|=1}\|Cx\|\le\inf_{\|x\|=1}|(Cx,x)| \tag{22}$$

if $\lambda x+\tfrac1\lambda Cx\ne 0$ and $\lambda x-\tfrac1\lambda Cx\ne 0$, $\|x\|=1$.

Special cases (S1)–(S3):

(S1) Let $\lambda x+\tfrac1\lambda Cx=0$. This implies $-\tfrac1\lambda Cx=\lambda x$. Therefore,
$$4\|Cx\|^2=\underbrace{\left|\left(C\Big(\lambda x+\tfrac1\lambda Cx\Big),\lambda x+\tfrac1\lambda Cx\right)\right|}_{=0}+\left|\left(C\Big(\lambda x-\tfrac1\lambda Cx\Big),\lambda x-\tfrac1\lambda Cx\right)\right| =\big|\big(C(2\lambda x),2\lambda x\big)\big|=4\lambda^2|(Cx,x)|,\quad\|x\|=1.$$

Let
$$\lambda^2=\frac{\|Cx\|}{\|x\|}=\|Cx\|,\quad\|x\|=1.$$

Then,
$$4\|Cx\|^2\le 4\|Cx\|\,|(Cx,x)|$$

and therefore
$$\|Cx\|\le|(Cx,x)|,\quad\|x\|=1.$$

Thus,
$$\inf_{\|y\|=1}\|Cy\|\le|(Cx,x)|,\quad\|x\|=1,$$

so that
$$\inf_{\|y\|=1}\|Cy\|\le\inf_{\|x\|=1}|(Cx,x)|=\inf_{x\ne 0}\frac{|(Cx,x)|}{\|x\|^2}.$$

Thus, relation (22) is also proven in the special case (S1).

(S2) The case $\lambda x-\tfrac1\lambda Cx=0$ is treated similarly to (S1).

(S3) If $\lambda x+\tfrac1\lambda Cx=0$ and $\lambda x-\tfrac1\lambda Cx=0$, this leads to
$$4\|Cx\|_2^2=\left(C\Big(\lambda x+\tfrac1\lambda Cx\Big),\lambda x+\tfrac1\lambda Cx\right)-\left(C\Big(\lambda x-\tfrac1\lambda Cx\Big),\lambda x-\tfrac1\lambda Cx\right)=0$$

for an $x$ with $\|x\|=1$. Therefore, $\inf_{\|x\|=1}\|Cx\|=0$, so that inequality (22) is also valid in the special case (S3).

Relation (22) with $\ge$ instead of $\le$ is trivial. On the whole, the chain of Equation (14) is proven.

Now, let $C\in\mathbb{C}^{n\times n}$ be regular, and let $\|\cdot\|$ denote any vector norm and the associated subordinate matrix norm. Because for the range of $C$ one has $R(C)=\mathbb{C}^n$, then
$$\inf_{x\ne 0}\frac{\|Cx\|}{\|x\|}=\inf_{x\ne 0}\frac{\|Cx\|}{\|C^{-1}Cx\|}=\inf_{x\ne 0}\frac{1}{\dfrac{\|C^{-1}Cx\|}{\|Cx\|}}=\frac{1}{\sup_{x\ne 0}\dfrac{\|C^{-1}Cx\|}{\|Cx\|}}=\frac{1}{\sup_{y\ne 0}\dfrac{\|C^{-1}y\|}{\|y\|}}=\frac{1}{\|C^{-1}\|},$$

thus showing relation (15). On the whole, the proof of Lemma 2 is completed.

Remark

We have seen that the method described by Heuser (Citation1975) to derive the relation
$$\sup_{\|x\|=1}|(Cx,x)|=\sup_{\|x\|=1}\|Cx\|$$

for $C\in\mathbb{C}^{n\times n}$ with $C^*=C$ can be carried over to prove the relation
$$\inf_{\|x\|=1}|(Cx,x)|=\inf_{\|x\|=1}\|Cx\|.$$

As opposed to this, using Taylor’s method in Taylor (Citation1958, pp. 322–323), it seems to be impossible to prove the inf relations in a similar way as the sup relations.

Since Heuser's book is written in German, we think that it is worthwhile to make Heuser's proof idea accessible to a broad readership. Of course, the author cannot rule out that the above inf formulas have been derived before. But he has not found such a derivation in the literature.

Series expansion of $\lambda_{\max}(\Psi(t)),\ t_0\le t\le t_0+\Delta t_0$

We determine the coefficients $\nu_j,\ j=0,1,2$ in the series expansion of Section 7, where we have set $\nu_j:=\nu_{j,\max},\ j=0,1,2$. The derivation follows a line similar to that of Kohaupt (Citation2001, pp. 6–7). We note, however, that the operators $\Psi(t)$ and $T^{(0)}$, $T^{(1)}$, $T^{(2)}$ are defined in a way different from that in Kohaupt (Citation2001). This is the first difference.

Let $\lambda_{\max}(\Psi(t))$ be the largest eigenvalue of $\Psi(t)$. Then, due to Kato (Citation1966, Theorem 5.11, Chapter II, pp. 115–116) and Kohaupt (Citation1999, Lemma 2.1),
$$\lambda_{\max}(\Psi(t))=\nu_0+\nu_1(t-t_0)+\nu_2(t-t_0)^2+\cdots,\quad t_0\le t\le t_0+\Delta t_0,$$

where the quantities $\nu_0$, $\nu_1$, and $\nu_2$ are derived now.

Let $n_{-1}:=n$, and let $\nu_k^{(0)}[T^{(0)}],\ k=1,\dots,n_{-1}$ be the eigenvalues of $T^{(0)}$. Then,
$$\nu_0=\max_{k=1,\dots,n_{-1}}\nu_k^{(0)}[T^{(0)}].$$

Further, define
$$M_{-1}:=X:=\mathbb{C}^n.$$

Let
$$V_0:=[v_1(\nu_0),\dots,v_{n_0}(\nu_0)]$$

be the matrix formed by the orthonormal set of eigenvectors $v_k(\nu_0),\ k=1,\dots,n_0$ associated with $\nu_0$, and let $P_{\nu_0}$ be the orthogonal projection on the algebraic eigenspace
$$M_0:=\operatorname{span}\{v_1(\nu_0),\dots,v_{n_0}(\nu_0)\}$$
(which is here identical with the geometric eigenspace). Then, $M_0=P_{\nu_0}X$ with $\dim M_0=n_0$. Further, $P_{\nu_0}$ can be calculated by
$$P_{\nu_0}=V_0V_0^*$$
(cf. Niemeyer & Wermuth, Citation1987, pp. 234–238). Let
$$\tilde{T}^{(1)}:=P_{\nu_0}T^{(1)}P_{\nu_0}$$

and let
$$\nu_k^{(1)}[\tilde{T}^{(1)}],\quad k=1,\dots,n_0,$$

be the eigenvalues of $\tilde{T}^{(1)}$. Then,
$$\nu_1:=\max_{k=1,\dots,n_0}\{\nu_k^{(1)}[\tilde{T}^{(1)}]\ |\ \text{the associated eigenvector lies in }M_0\}.$$

Let
$$V_1:=[v_1(\nu_1),\dots,v_{n_1}(\nu_1)]$$

be the matrix formed by the orthonormal set of eigenvectors $v_k(\nu_1),\ k=1,\dots,n_1$ associated with $\nu_1$, and let $P_{\nu_1}$ be the orthogonal projection on the algebraic eigenspace
$$M_1:=\operatorname{span}\{v_1(\nu_1),\dots,v_{n_1}(\nu_1)\}.$$

Then, $M_1=P_{\nu_1}X$ with $\dim M_1=n_1$. As above, $P_{\nu_1}$ can be calculated by
$$P_{\nu_1}=V_1V_1^*.$$

Let
$$\hat{T}^{(2)}:=P_{\nu_1}\tilde{T}^{(2)}P_{\nu_1}:=P_{\nu_1}\big(T^{(2)}-T^{(1)}S_{\nu_0}T^{(1)}\big)P_{\nu_1}$$

with
$$S_{\nu_0}:=\sum_{\nu_k^{(0)}\ne\nu_0}\frac{1}{\nu_k^{(0)}-\nu_0}\,P_{\nu_k^{(0)}}$$
(for $S_{\nu_0}$, cf. Kato, Citation1966, p. 40, Problem 5.10, Formula (5.32); for $\tilde{T}^{(2)}$, cf. Kato, Citation1966, p. 116). Let
$$\nu_k^{(2)}[\hat{T}^{(2)}],\quad k=1,\dots,n_1,$$

be the eigenvalues of $\hat{T}^{(2)}$. Then,
$$\nu_2:=\max_{k=1,\dots,n_1}\{\nu_k^{(2)}[\hat{T}^{(2)}]\ |\ \text{the associated eigenvector lies in }M_1\}.$$

Remark

In the formula for $\nu_1$, exactly those eigenvectors $v^{(1)}$ lie in $M_0$ for which $\operatorname{rank}P_{\nu_0}=\operatorname{rank}[P_{\nu_0},v^{(1)}]$. (In Kohaupt (Citation2001, p. 7, Remark), instead of $P_{\nu_0}$ it reads $P_{\nu_1}$ there, which is a typo.) Similarly, one proceeds in the formula for $\nu_2$.

The second difference to Kohaupt (Citation2001) is that the formulas of Kohaupt (Citation2001, Theorem 4) are not applied. Instead, one may use the relations $D_+^k\|\Psi(t_0)\|_2=D_+^k\|s(t_0)\|_\infty,\ k=0,1,2$.

Formulas for $D_+^k\|s(t_0)\|_\infty,\ k=0,1,2$

For these formulas, we refer to Kohaupt (Citation2002, pp. 433–434). Let $s\in C^m(\mathbb{R}_0^+,\mathbb{R}^n)$ and $s(t)=[s_1(t),\dots,s_n(t)]^T$, and define the following sign functionals for $i\in\{1,\dots,n\}$:
$$s_i^{(0)}:=\operatorname{sgn}[s_i(t_0)],$$
$$s_i^{(1)}:=\begin{cases}\operatorname{sgn}[s_i(t_0)],&s_i(t_0)\ne 0\\ \operatorname{sgn}[Ds_i(t_0)],&s_i(t_0)=0\end{cases}$$
$$\vdots$$
$$s_i^{(m)}:=\begin{cases}\operatorname{sgn}[s_i(t_0)],&s_i(t_0)\ne 0\\ \operatorname{sgn}[Ds_i(t_0)],&s_i(t_0)=0,\ Ds_i(t_0)\ne 0\\ \quad\vdots&\\ \operatorname{sgn}[D^m s_i(t_0)],&D^k s_i(t_0)=0,\ k=0,1,\dots,m-1\end{cases}$$
or, briefly,
$$s_i^{(k)}=\begin{cases}s_i^{(k-1)},&s_i^{(k-1)}\ne 0\\ \operatorname{sgn}[D^k s_i(t_0)],&s_i^{(k-1)}=0\end{cases}$$
$i=1,\dots,n;\ k=1,\dots,m$. With these sign functionals, define the further functionals
$$S_i^{(k)}:=s_i^{(k)}\cdot D^k s_i(t_0),\quad i=1,\dots,n;\ k=0,1,\dots,m. \tag{23}$$

Then, the right derivatives for real vector functions read as follows.

Theorem 6

($p=\infty$, real vector function) Let $s:\mathbb{R}_0^+\to\mathbb{R}^n$ be an $n$-dimensional real-valued vector function that is $m$ times continuously differentiable, and let $t_0\in\mathbb{R}_0^+$. Suppose additionally that each two components of $s$ are either identical or intersect each other at most finitely often near $t_0$. Further, let $I_{-1}=\{1,\dots,n\}$, and let $I_k$ be the set of all indices $i_k\in I_{k-1}$ where $S_i^{(k)}$ from (23) attains its maximum, i.e.
$$I_k:=\Big\{i_k\in I_{k-1}\ \Big|\ S_{i_k}^{(k)}=\max_{i\in I_{k-1}}S_i^{(k)}\Big\},\quad k=1,\dots,m.$$
Then, the right derivatives of $t\mapsto\|s(t)\|_\infty$ at $t=t_0\ge 0$ are given by
$$D_+^k\|s(t_0)\|_\infty=\max_{i\in I_{k-1}}S_i^{(k)},\quad k=1,\dots,m.$$

Remark

In our case, in Section 7, one has $n=2$ and $m=2$. Further, $s(t),\ t_0\le t\le t_0+\Delta t_0$, is analytic, so that the additional condition is automatically fulfilled.

Determination of the symmetric positive semi-definite intensity matrix $Q=Q_b$ from $Q_f$

Since
$$b(t)=\begin{bmatrix}0\\M^{-1}f(t)\end{bmatrix}=\mathcal{B}w(t)\quad\text{with}\quad\mathcal{B}=\begin{bmatrix}0&0\\0&M^{-1}\end{bmatrix},\qquad w(t)=\begin{bmatrix}0\\f(t)\end{bmatrix},$$
one obtains, using $M^T=M$, the relation $\mathcal{B}^T=\mathcal{B}$ and thus
$$b(t)b^T(\tau)=\mathcal{B}w(t)w^T(\tau)\mathcal{B}^T=\mathcal{B}w(t)w^T(\tau)\mathcal{B}.$$

Thus, according to Müller (Citation1976, Formula (9.7)) with $m_b(t)=0$, one has
$$N_b(t,\tau)=E\{b(t)b^T(\tau)\}=\mathcal{B}E\{w(t)w^T(\tau)\}\mathcal{B}.$$

Now, since $f(t)$ is white noise, one has $N_f(t,\tau)=E\{f(t)f^T(\tau)\}=Q_f\,\delta(t-\tau)$, leading to
$$E\{w(t)w^T(\tau)\}=\begin{bmatrix}0&0\\0&Q_f\end{bmatrix}\delta(t-\tau).$$
This implies
$$N_b(t,\tau)=\mathcal{B}\begin{bmatrix}0&0\\0&Q_f\end{bmatrix}\mathcal{B}\,\delta(t-\tau)=Q\,\delta(t-\tau),$$
giving
$$Q=Q_b=\begin{bmatrix}0&0\\0&M^{-1}Q_fM^{-1}\end{bmatrix}$$
with $M^{-T}=M^{-1}$.

Some details on the computation of $y=D_+^k\|P_x(t)-P\|_2,\ k=1,2$

According to Section 7 and Appendix A.3, the computation of $y=D_+^k\|P_x(t)-P\|_2=D_+^k\|\Psi(t)\|_2=D_+^k\|\Phi(t)C\Phi^*(t)\|_2=D_+^k\|\Phi(t)(P_0-P)\Phi^*(t)\|_2,\ k=1,2$, is based on
$$\nu_{1,\max}:=\max_{k=1,\dots,n_0}\{\nu_k^{(1)}[\tilde{T}^{(1)}]\ |\ \text{the associated eigenvector lies in }M_0\}$$

and
$$\nu_{1,\min}:=-\max_{k=1,\dots,n_0}\{\nu_k^{(1)}[-\tilde{T}^{(1)}]\ |\ \text{the associated eigenvector lies in }M_0\}$$

as well as
$$\nu_{2,\max}:=\max_{k=1,\dots,n_1}\{\nu_k^{(2)}[\hat{T}^{(2)}]\ |\ \text{the associated eigenvector lies in }M_1\}$$

and
$$\nu_{2,\min}:=-\max_{k=1,\dots,n_1}\{\nu_k^{(2)}[-\hat{T}^{(2)}]\ |\ \text{the associated eigenvector lies in }M_1\},$$

where the quantities $\tilde{T}^{(1)}$ and $\hat{T}^{(2)}$ depend on $t_0$.

For all $t_0=0(\Delta t)t_e=0(0.1)25$, the constraint "the associated eigenvector lies in $M_0$" resp. "the associated eigenvector lies in $M_1$" was fulfilled, with the only exception of that for $\nu_{1,\min}$ and $\nu_{2,\min}$ with $t_0=3.4$; the reason for this could not be clarified. In this exceptional case, we have set the quantities equal to zero. Since for $t_0=3.4$ we have $\|\Psi(t_0)\|_2=\max\{|\lambda_{\max}(\Psi(t_0))|,|\lambda_{\min}(\Psi(t_0))|\}=|\lambda_{\max}(\Psi(t_0))|=\lambda_{\max}(\Psi(t_0))$, the norm derivatives are given by $D_+\|\Psi(t_0)\|_2=\nu_{1,\max}$ and $D_+^2\|\Psi(t_0)\|_2=2\nu_{2,\max}$, and thus do not depend on $\nu_{1,\min}$ or $\nu_{2,\min}$ for $t_0=3.4$.

Remark

It is interesting to note that for all $t_0=0(0.1)25$ without exception, only one of the eigenvalues $\nu_k^{(1)}[\tilde{T}^{(1)}],\ \nu_k^{(1)}[-\tilde{T}^{(1)}],\ k=1,\dots,n_0$ and $\nu_k^{(2)}[\hat{T}^{(2)}],\ \nu_k^{(2)}[-\hat{T}^{(2)}],\ k=1,\dots,n_1$ was different from zero, and further that the above-mentioned constraints can be dropped for the given data without changing the results.

Remark

Finally, we want to remind the reader that, since the operator $\Psi(t)=\Phi(t)(P_0-P)\Phi^*(t),\ t\ge 0$, is different from zero as well as finite dimensional, it is self-adjoint and completely continuous. Therefore, according to Achieser and Glasman (Citation1968, no. 60, p. 158) or Kantorowitsch and Akilow (Citation1964, Chapter IX, §4.3, p. 255), $\Psi(t)$ has at least one eigenvalue different from zero for all $t\ge 0$.

Simplification of the computation of $D_+^k\|\Psi(t_0)\|_2=D_+^k\|P_x(t_0)-P\|_2,\ k=1,2$

In the case $|\lambda_{\max}(\Psi(t_0))|\ne|\lambda_{\min}(\Psi(t_0))|$, one can simplify the computation as follows. Let
$$\lambda(\Psi(t_0)):=\begin{cases}\lambda_{\max}(\Psi(t_0)),&|\lambda_{\max}(\Psi(t_0))|>|\lambda_{\min}(\Psi(t_0))|\\ \lambda_{\min}(\Psi(t_0)),&|\lambda_{\max}(\Psi(t_0))|<|\lambda_{\min}(\Psi(t_0))|\end{cases}$$

Then,
$$\|\Psi(t_0)\|_2=|\lambda(\Psi(t_0))|=:|s(t_0)|.$$

Further, according to the last Remark above, $\Psi(t_0)\ne 0$ and therefore $s(t_0)\ne 0$, so that in Appendix A.4, with $n=1$ and $s_1(t_0)=s(t_0)$, we obtain the signs
$$s^{(0)}=s^{(1)}=s^{(2)}=\dots=s^{(m)}=\operatorname{sgn}(s(t_0))=\operatorname{sgn}(\lambda(\Psi(t_0))).$$

Thus, from Appendix A.3, we get
$$\|\Psi(t_0)\|_2=s^{(0)}\nu_0,\qquad D_+\|\Psi(t_0)\|_2=s^{(0)}\nu_1,\qquad D_+^2\|\Psi(t_0)\|_2=s^{(0)}\,2\nu_2.$$
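For a simple, strictly dominant eigenvalue, $\nu_1$ reduces to the first-order perturbation coefficient $(T^{(1)}v,v)$ with $v$ the associated unit eigenvector. A hedged numerical sketch with random data (not the paper's implementation):

```python
# Sketch: D+ ||Psi(t0)||_2 = s^(0) * nu_1 for a simple dominant eigenvalue,
# with nu_1 = (T^(1) v, v) for the associated unit eigenvector v.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4)); C = 0.5 * (C + C.T)
t0 = 0.8

Phi = expm(A * t0)
Psi = Phi @ C @ Phi.T
lam, V = np.linalg.eigh(Psi)
i = np.argmax(np.abs(lam))                 # dominant eigenvalue (assumed simple)
v, s0 = V[:, i], np.sign(lam[i])

T1 = Phi @ (A @ C + C @ A.T) @ Phi.T       # T^(1)
d_plus = s0 * (v @ T1 @ v)                 # s^(0) * nu_1

# check against a difference quotient of ||Psi(t)||_2
h = 1e-6
nrm = lambda t: np.linalg.norm(expm(A * t) @ C @ expm(A * t).T, 2)
print(d_plus, (nrm(t0 + h) - nrm(t0 - h)) / (2 * h))
```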

With the given data in Section 8, we had $|\lambda_{\max}(\Psi(t_0))|>|\lambda_{\min}(\Psi(t_0))|$ and $s^{(0)}=1$ for all $t_0=0(0.1)25$. This means $\|\Psi(t)\|_2=|\lambda_{\max}(\Psi(t))|=\lambda_{\max}(\Psi(t)),\ t=0(0.1)25$.