Mathematical and Computer Modelling of Dynamical Systems
Methods, Tools and Applications in Engineering and Related Sciences
Volume 30, 2024 - Issue 1
Research Article

Model Reduction of Parametric Differential-Algebraic Systems by Balanced Truncation

Pages 303-341 | Received 10 Sep 2021, Accepted 05 Mar 2024, Published online: 11 Jul 2024

ABSTRACT

We deduce a procedure to apply balanced truncation to parameter-dependent differential-algebraic systems. For that we solve multiple projected Lyapunov equations for different parameter values to compute the Gramians that are required for the truncation procedure. As this process would lead to high computational costs if we perform it for a large number of parameters, we combine this approach with the reduced basis method that determines a reduced representation of the Lyapunov equation solutions for the parameters of interest. Residual-based error estimators are then used to evaluate the quality of the approximations. After introducing the procedure for a general class of differential-algebraic systems, we turn our focus to systems with a specific structure, for which the method can be applied particularly efficiently. We illustrate the efficiency of our approach on several models from fluid dynamics and mechanics.

1. Introduction

In the modelling of various industrial processes one often obtains systems of differential-algebraic equations (DAEs) that are of high dimension. Typical examples are electrical circuits, thermal and diffusion processes, multibody systems, and specific discretized partial differential equations (Brenan, Campbell, and Petzold Citation1995; Campbell Citation1980). Often, these systems are further dependent on parameters that can describe physical properties such as material constants and which are often subject to optimization. The resulting parameter-dependent differential-algebraic systems have the form

$$\frac{d}{dt}\, E(\mu) z(t) = A(\mu) z(t) + B(\mu) u(t), \tag{1a}$$
$$y(t) = C(\mu) z(t), \tag{1b}$$

where E(μ), A(μ) ∈ R^{N,N}, B(μ) ∈ R^{N,m}, and C(μ) ∈ R^{p,N} depend on parameters μ ∈ D, where D ⊆ R^d. The input, the state, and the output are denoted by u(t) ∈ R^m, z(t) ∈ R^N, and y(t) ∈ R^p for t ∈ [0,∞). The matrix E(μ) may be singular. However, throughout this paper, we assume uniform regularity, i.e., the pencil sE(μ) − A(μ), which is a matrix polynomial in the variable s, is assumed to be regular for all μ ∈ D, that is, det(sE(μ) − A(μ)) is not the zero polynomial for any μ ∈ D. Moreover, we assume that the system under consideration is uniformly asymptotically stable, i.e., the system (or the pencil sE(μ) − A(μ)) is asymptotically stable for all μ ∈ D, meaning that all finite eigenvalues of the pencil sE(μ) − A(μ) have negative real part for all μ ∈ D. Throughout the derivations provided in this work, we assume that E(·), A(·), B(·), and C(·) are affine in the parameter μ, i.e.,

$$E(\mu) = E_0 + \sum_{k=1}^{n_E} \Theta_k^E(\mu)\, E_k, \qquad A(\mu) = A_0 + \sum_{k=1}^{n_A} \Theta_k^A(\mu)\, A_k,$$
$$B(\mu) = B_0 + \sum_{k=1}^{n_B} \Theta_k^B(\mu)\, B_k, \qquad C(\mu) = C_0 + \sum_{k=1}^{n_C} \Theta_k^C(\mu)\, C_k, \tag{2}$$

where all Θ_k^E, Θ_k^A, Θ_k^B, Θ_k^C : D → R are parameter-dependent continuous functions and E_k, A_k, B_k, C_k are parameter-independent matrices. For reasons of computational efficiency, we always assume that n_E, n_A, n_B, n_C ≪ N, and we also assume that the number of inputs and outputs is small compared to the dimension of the state, i.e., m, p ≪ N. For ease of notation, it is sometimes handy to include the matrices E_0, A_0, B_0, and C_0 in the sums in (2) and to multiply them with the factors Θ_0^E = Θ_0^A = Θ_0^B = Θ_0^C ≡ 1. The model (1) is also referred to as the full-order model (FOM). Often, one also considers the input/output mapping of the system in the frequency domain. This relation is typically expressed by the transfer function G(μ,s) := C(μ)(sE(μ) − A(μ))^{-1}B(μ).
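The affine structure in (2) is what later allows parameter-dependent quantities to be assembled cheaply. As a minimal illustration (the function and variable names below are our own and not part of the paper), an affinely parameter-dependent matrix can be evaluated as follows:

```python
import numpy as np

def assemble_affine(M0, terms, mu):
    """Evaluate an affinely parameter-dependent matrix
        M(mu) = M0 + sum_k theta_k(mu) * M_k,
    given the constant part M0 and a list of (theta_k, M_k) pairs.
    A minimal sketch; the names are illustrative, not from the paper."""
    M = M0.copy()
    for theta_k, M_k in terms:
        M = M + theta_k(mu) * M_k
    return M

# Example: E(mu) = E0 + mu * E1 for a scalar parameter mu.
E0, E1 = np.eye(3), np.diag([1.0, 2.0, 3.0])
E = assemble_affine(E0, [(lambda mu: mu, E1)], mu=0.5)
```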

Because of the high state-space dimension N of (1) it is useful to apply reduction methods to extract the essential information of the system and its solution. More precisely, we want to determine a (uniformly regular and asymptotically stable) reduced-order model (ROM)

$$\frac{d}{dt}\, E_R(\mu) z_R(t) = A_R(\mu) z_R(t) + B_R(\mu) u(t), \qquad y_R(t) = C_R(\mu) z_R(t), \tag{3}$$

with E_R(μ), A_R(μ) ∈ R^{r,r}, B_R(μ) ∈ R^{r,m}, and C_R(μ) ∈ R^{p,r}, where r ≪ N. The reduced state and output are z_R(t) ∈ R^r and y_R(t) ∈ R^p. The aim is to determine the surrogate model in such a way that the output of the ROM well-approximates the output of the FOM for all admissible inputs u(·) and all parameters μ ∈ D. In terms of the transfer function, this means that the error G(μ,·) − G_R(μ,·) is sufficiently small in an appropriate norm for all μ ∈ D, where G_R(μ,s) denotes the transfer function of the ROM.

For parameter-independent problems, i.e.,

$$E(\mu) \equiv E, \quad A(\mu) \equiv A, \quad B(\mu) \equiv B, \quad C(\mu) \equiv C, \quad G(\mu,s) \equiv G(s),$$

there are several classes of model order reduction techniques. Examples are singular value-based approaches like balanced truncation (Benner et al. Citation2017; Moore Citation1981; Tombs and Postlethwaite Citation1987) and Hankel norm approximations (Glover Citation1984). Additionally, there are Krylov subspace-based methods such as the iterative rational Krylov algorithm (IRKA) (Benner et al. Citation2017; Flagg, Beattie, and Gugercin Citation2012; Gugercin, Antoulas, and Beattie Citation2008; Gugercin, Stykel, and Wyatt Citation2013) and moment matching as well as data-driven methods such as the Loewner framework (Beattie, Gugercin, and Mehrmann Citation2022; Gosea, Poussot-Vassal, and Antoulas Citation2021; Mayo and Antoulas Citation2007). Corresponding methods for parameter-dependent systems are introduced in (Baur, Benner, and Feng Citation2014; Benner et al. Citation2016; Geuss, Panzer, and Lohmann Citation2013).

In this article, we focus on balanced truncation which is one of the most popular reduction techniques. This is mainly due to the guaranteed asymptotic stability of the ROM, the existence of an error bound, and appealing numerical techniques for the involved Lyapunov equations (Benner and Schneider Citation2013; Druskin, Simoncini Citation2011; Schmidt, Wittwar, and Haasdonk Citation2020; Son and Stykel Citation2017). Within balanced truncation, Lyapunov equations corresponding to the original system need to be solved. The solutions of these equations are called Gramians. They describe the input-to-state and state-to-output behavior and are used to construct projection matrices for the reduction. The multiplication of the system matrices with these projection matrices then results in a ROM.
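For orientation, the following sketch shows the classical square-root variant of balanced truncation for a standard state-space system with E = I_N and small dense matrices. It is only meant to illustrate the Gramian-based projection idea recalled above; the function names, the symmetrization, and the small regularization added before the Cholesky factorizations are our own choices, and the sketch does not yet reflect the projected, large-scale DAE setting treated later.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation for a small dense standard system
    x' = A x + B u, y = C x (i.e. E = I).  Illustrative only; large-scale
    problems use low-rank Lyapunov solvers instead of dense ones."""
    n = A.shape[0]
    # Controllability/observability Gramians: A P + P A^T + B B^T = 0, etc.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Cholesky-type factors (small jitter guards against semidefiniteness).
    R = cholesky(0.5 * (P + P.T) + 1e-12 * np.eye(n), lower=True)
    S = cholesky(0.5 * (Q + Q.T) + 1e-12 * np.eye(n), lower=True)
    U, sig, Vt = svd(S.T @ R)                   # Hankel singular values in sig
    W = S @ U[:, :r] / np.sqrt(sig[:r])         # left projection matrix
    T = R @ Vt[:r, :].T / np.sqrt(sig[:r])      # right projection matrix
    return W.T @ A @ T, W.T @ B, C @ T, sig
```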

All the above-mentioned methods focus on the case E = I_N and are therefore not directly applicable in the DAE case. Even if the problem is not parameter-dependent, there are several challenges that one has to face (here we assume, for simplicity, that the problem is parameter-independent):

a) Since the matrix E is typically singular, the transfer function G(s) is possibly improper, i.e., it may have a polynomial part which is unbounded for growing values of |s|. If the reduced transfer function G_R(s) does not match this polynomial part, then the error transfer function G − G_R is not an element of the rational Hardy spaces RH_∞^{p,m} or RH_2^{p,m} (see, e.g., (Zhou, Doyle, and Glover Citation1996) for a definition) and the output error cannot be bounded in terms of the input norm. Thus, a model reduction scheme for DAEs must preserve the polynomial part of the transfer function. This is addressed in (Gugercin, Stykel, and Wyatt Citation2013; Mehrmann and Stykel Citation2005; Stykel Citation2004).

b) In balanced truncation, one has to solve two large-scale Lyapunov equations. In the DAE case, one has to consider a generalization of these – so-called projected Lyapunov equations. Thereby, the Lyapunov equations are projected onto the left and right deflating subspaces corresponding to the finite or infinite eigenvalues of the matrix pencil sE − A. This involves specific projectors which are hard to form explicitly and which might destroy the sparsity structure of the coefficient matrices. However, for DAE systems of special structure, the projectors are of a particular form which can be exploited from the numerical point of view. For details, we refer to (Benner, Saak, and Uddin Citation2016; Heinkenschloss, Sorensen, and Sun Citation2008; Saak and Voigt Citation2018; Stykel Citation2008).

Besides the approaches mentioned above, some of the issues in model reduction of DAEs are also addressed in (Alì et al. Citation2014; Banagaaya, Ali, and Schilders Citation2016). Here, the authors introduce an index-aware model order reduction scheme that splits the DAE system into a differential and an algebraic part. However, in this method, it is often difficult to maintain sparsity of the system matrices, and hence, it is in general not feasible for many systems of large dimension. Data-driven approaches have also recently been extended to differential-algebraic systems, see, e.g., (Antoulas, Gosea, and Heinkenschloss Citation2020; Gosea, Zhang, and Antoulas Citation2020). Finally, we would like to mention (Benner and Stykel Citation2017), which provides an overview of model order reduction for DAE systems.

Different approaches for parametric model reduction have been developed as well. Among others, there are two main classes of such methods. First, one can determine global bases for the projection spaces for the entire parametric model, e.g., by concatenating the local bases for different parameter values, see, e.g., (Baur et al. Citation2011). Second, there exist approaches for constructing local bases for a specific parameter value by combining local basis information of several pre-reduced models at different parameter values, e.g., by interpolating between different models (Eid et al. Citation2011; Geuss, Panzer, and Lohmann Citation2013). We also refer to (Benner, Gugercin, and Willcox Citation2015) for an extensive survey on such methods.

However, to the best of our knowledge, none of the techniques presented in (Benner, Gugercin, and Willcox Citation2015) has been extended to the case of DAE systems so far. Therefore, the motivation of this paper is to develop a first method for this purpose based on balanced truncation. If we were to reduce a parameter-dependent system by balanced truncation in a naive fashion, we would have to solve the Lyapunov equations for each individual parameter of interest, which is computationally prohibitive. To avoid these computational costs, for the case E = I_N, the authors in (Son et al. Citation2021) apply interpolation techniques to approximate the Gramians for all admissible parameters. The authors in (Son and Stykel Citation2017), on the other hand, apply the reduced basis method, which is a well-established method to reduce parameter-dependent partial differential equations (Hesthaven, Rozza, and Stamm Citation2016; Kunkel and Mehrmann Citation2006). Using this method, the authors determine approximate subspaces in which the solutions of the corresponding Lyapunov equations are contained.

In this paper, we extend the reduced basis method to the case of DAE systems, where the parametric projected Lyapunov equations are only solved for a few sampling parameters. Then, based on these solutions, a reduced subspace in which the Lyapunov equation solutions for all μ ∈ D approximately live is constructed. The latter steps form the computationally expensive offline phase. After that, using the reduced basis representation, the Lyapunov equations can be solved much more efficiently for all μ ∈ D in the online phase. A crucial question in the offline phase is the choice of the sample parameters. Usually, a grid of test parameters is selected. For these, the error is quantified using a posteriori error estimators. Then, new samples are taken at those parameters for which the estimated error is largest.

In this paper, we generalize the reduced basis balanced truncation method of (Son and Stykel Citation2017) to differential-algebraic systems with a focus on structured systems from specific application areas. The main problems we solve in this paper are the following:

a) We derive error estimators for parameter-dependent projected Lyapunov equations. These can be improved if the given system is uniformly strictly dissipative. Since this condition is not always satisfied, we discuss transformations to achieve this for systems of a particular index 3 structure.

b) We discuss in detail how the polynomial part of parameter-dependent transfer functions of DAE systems can be extracted by efficiently approximating their Markov parameters.

c) We apply this approach to several application problems and illustrate its effectiveness.

Our method also follows the paradigm of decomposing the procedure into an offline and an online phase. In the offline phase, we approximate the image spaces of the controllability and observability Gramians, which are the solutions of projected Lyapunov equations. These approximate image spaces are then used in the online phase to determine approximations of the Gramians, and hence the reduced-order model, efficiently for all required parameters. The online phase can be performed for any parameter μ ∈ D. Due to the reduced dimension obtained by the offline phase, this step is very fast and can be performed repeatedly for all required parameters. Moreover, in this work, we introduce new error estimators, which are used in the offline phase to assess the quality of the reduction.

This paper is organized as follows. In Section 2, two model problems are introduced that motivate the method presented in this paper. In Section 3, we review existing concepts and approaches that will be used later in this paper. Thereby, in Subsection 3.1, we recall projection techniques to decouple the differential and algebraic equations in (1). Afterwards, in Subsection 3.2, we consider model order reduction by balanced truncation for DAE systems. We further address the numerical challenges that arise in computing the solutions of the required Lyapunov equations. Since solving Lyapunov equations for every requested parameter leads to high computational costs, in Section 4 the reduced basis method is presented, which was first applied to standard Lyapunov equations in (Son and Stykel Citation2017). We derive two error estimators for our method; one of them is motivated by the estimator from (Son and Stykel Citation2017), the other one is an adaptation of the estimator presented in (Schmidt, Wittwar, and Haasdonk Citation2020). Here, we also discuss the treatment of the algebraic part of the system that may lead to polynomial parts in the transfer function. Finally, in Section 5, we evaluate the method of this paper by applying it to our model problems from Section 2.

2. Model problems

2.1. Problem 1: Stokes-like DAEs

The first example is the system

$$\frac{d}{dt}\begin{bmatrix} E(\mu) & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix} x(t)\\ \lambda(t)\end{bmatrix} = \begin{bmatrix} A(\mu) & G(\mu)\\ G(\mu)^T & 0\end{bmatrix}\begin{bmatrix} x(t)\\ \lambda(t)\end{bmatrix} + \begin{bmatrix} B_1(\mu)\\ B_2(\mu)\end{bmatrix} u(t), \tag{4}$$
$$y(t) = \begin{bmatrix} C_1(\mu) & C_2(\mu)\end{bmatrix}\begin{bmatrix} x(t)\\ \lambda(t)\end{bmatrix},$$

which arises, e.g., if we discretize the incompressible Navier–Stokes equation. The parameter-independent version is presented in (Mehrmann and Stykel Citation2005; Stykel Citation2004). The system matrices depend on a parameter μ ∈ D, where D ⊆ R^d is the parameter domain. For fluid-flow problems, the matrices E(μ), A(μ) ∈ R^{n,n} represent the masses and the discretized Laplace operator. Naturally, it holds that E(μ) = E(μ)^T > 0 and A(μ) = A(μ)^T < 0 for all μ ∈ D, where H > 0 (H ≥ 0) denotes a positive (semi)definite matrix H and correspondingly H < 0 (H ≤ 0) denotes a negative (semi)definite matrix. The discrete gradient is given by G(μ) ∈ R^{n,q}, which we assume to be of full column rank for all μ ∈ D. The matrices B_1(μ) ∈ R^{n,m}, B_2(μ) ∈ R^{q,m} and C_1(μ) ∈ R^{p,n}, C_2(μ) ∈ R^{p,q} are the input and the output matrices, respectively. The state of the system at time t is given by x(t) ∈ R^n and λ(t) ∈ R^q. The vectors u(t) ∈ R^m and y(t) ∈ R^p are the input and output of the system.

In view of (2), we assume that the symmetry and definiteness of E(·) and A(·) are reflected in the affine parameter structure as

$$E(\mu) = \sum_{k=0}^{n_E} \Theta_k^E(\mu)\, E_k \qquad\text{and}\qquad A(\mu) = \sum_{k=0}^{n_A} \Theta_k^A(\mu)\, A_k,$$

where Θ_k^E(·), Θ_k^A(·) are positive parameter-dependent continuous functions with the convention Θ_0^E = Θ_0^A ≡ 1. Furthermore, E_0 = E_0^T > 0, A_0 = A_0^T < 0, E_k = E_k^T ≥ 0, A_k = A_k^T ≤ 0 for k ≥ 1 are parameter-independent matrices. Moreover, for reasons of computational efficiency, we always assume that n_E, n_A ≪ n.

2.2. Problem 2: Mechanical systems

The second system we consider is of the form

$$\frac{d}{dt}\begin{bmatrix} I_{n_x} & 0 & 0\\ 0 & M(\mu) & 0\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix} x_1(t)\\ x_2(t)\\ \lambda(t)\end{bmatrix} = \begin{bmatrix} 0 & I_{n_x} & 0\\ -K(\mu) & -D(\mu) & G(\mu)\\ G(\mu)^T & 0 & 0\end{bmatrix}\begin{bmatrix} x_1(t)\\ x_2(t)\\ \lambda(t)\end{bmatrix} + \begin{bmatrix} 0\\ B_x(\mu)\\ 0\end{bmatrix}u(t), \tag{5}$$
$$y(t) = \begin{bmatrix} C_x(\mu) & 0 & 0\end{bmatrix}\begin{bmatrix} x_1(t)\\ x_2(t)\\ \lambda(t)\end{bmatrix},$$

which results from a linearization of the spring-mass-damper model presented in (Mehrmann and Stykel Citation2005). The mass, damping, and stiffness matrices M(μ), D(μ), K(μ) ∈ R^{n_x,n_x} are assumed to be symmetric and positive definite for all μ ∈ D. The matrices B_x(μ) ∈ R^{n_x,m} and C_x(μ) ∈ R^{p,n_x} are the input and the output matrices. The matrix G(μ) ∈ R^{n_x,q} reflects algebraic constraints on the system and is of full column rank. In this example, the state includes the displacement x_1(t) ∈ R^{n_x}, the velocity x_2(t) ∈ R^{n_x}, and the Lagrange multiplier λ(t) ∈ R^q.

For convenience, we also define

$$E(\mu) := \begin{bmatrix} I_{n_x} & 0\\ 0 & M(\mu)\end{bmatrix},\quad A(\mu) := \begin{bmatrix} 0 & I_{n_x}\\ -K(\mu) & -D(\mu)\end{bmatrix},\quad B(\mu) := \begin{bmatrix} 0\\ B_x(\mu)\end{bmatrix},\quad C(\mu) := \begin{bmatrix} C_x(\mu) & 0\end{bmatrix},\quad n := 2n_x. \tag{6}$$

Then with this notation, we can write the mechanical system similarly as in (4) with the difference that the off-diagonal blocks of the state matrix are not the transposes of each other.

Similarly to the first model problem, we assume that M(·), D(·), and K(·) can be written as

$$M(\mu) = \sum_{k=0}^{n_M}\Theta_k^M(\mu)\, M_k, \qquad D(\mu) = \sum_{k=0}^{n_D}\Theta_k^D(\mu)\, D_k, \qquad\text{and}\qquad K(\mu) = \sum_{k=0}^{n_K}\Theta_k^K(\mu)\, K_k,$$

where Θ_k^M(·), Θ_k^D(·), Θ_k^K(·) are positive parameter-dependent continuous functions using the convention Θ_0^M = Θ_0^D = Θ_0^K ≡ 1. Furthermore, M_0 = M_0^T > 0, D_0 = D_0^T > 0, K_0 = K_0^T > 0 and M_k = M_k^T ≥ 0, D_k = D_k^T ≥ 0, K_k = K_k^T ≥ 0 for k ≥ 1 are parameter-independent matrices with n_M, n_D, n_K ≪ n_x.

3. Preliminaries

3.1. Decoupling of differential and algebraic equations

In this section, we briefly describe the projection technique for decoupling the algebraic constraints in DAEs of the form (1) from the remaining differential equations for a fixed parameter μ ∈ D, under the assumptions of regularity and asymptotic stability. For simplicity of presentation, we omit the parameter μ in this section. Details can be found in (Heinkenschloss, Sorensen, and Sun Citation2008; Saak and Voigt Citation2018; Stykel Citation2004, Citation2006).

As shown in (Berger, Ilchmann, and Trenn Citation2012), there exist regular matrices W, T ∈ R^{N,N} that transform the system matrices into quasi-Weierstraß form, that is

$$E = W\begin{bmatrix} I_{N_f} & 0\\ 0 & N\end{bmatrix}T, \qquad A = W\begin{bmatrix} J & 0\\ 0 & I_{N_\infty}\end{bmatrix}T, \tag{7}$$

where J ∈ R^{N_f,N_f} and N ∈ R^{N_∞,N_∞} is nilpotent with nilpotency index ν that coincides with the (Kronecker) index of the DAE system (1). Here, N_f is the number of finite eigenvalues of the matrix pencil sE − A and N_∞ is the number of infinite ones. Accordingly, the spectral projectors onto the left and right deflating subspaces of sE − A corresponding to the N_f finite eigenvalues are defined as

$$\Pi_l := W\begin{bmatrix} I_{N_f} & 0\\ 0 & 0\end{bmatrix}W^{-1}, \qquad \Pi_r := T^{-1}\begin{bmatrix} I_{N_f} & 0\\ 0 & 0\end{bmatrix}T.$$

We define

$$F_J(t) := T^{-1}\begin{bmatrix} e^{Jt} & 0\\ 0 & 0\end{bmatrix}W^{-1}, \quad t \in \mathbb{R}, \qquad\text{and}\qquad F_N(k) := T^{-1}\begin{bmatrix} 0 & 0\\ 0 & N^k\end{bmatrix}W^{-1}, \quad k = 0,\ldots,\nu-1,$$

to derive the states

$$z_p(t) := \int_0^t F_J(t-\tau)\, B\, u(\tau)\, d\tau, \qquad z_i(t) := \sum_{k=0}^{\nu-1} F_N(k)\, B\, u^{(k)}(t),$$

under the assumption that the input u(·) is sufficiently smooth and that the consistency conditions are satisfied, i.e., we assume that z_i(0) = ∑_{k=0}^{ν−1} F_N(k) B u^{(k)}(0). They satisfy z(t) = z_p(t) + z_i(t) such that z(t) solves Equation (1a). We refer to z_p(t) and z_i(t) as the differential and algebraic states. These states are associated with the proper and improper controllability Gramians

$$P_p = \int_0^\infty F_J(t)\, B B^T F_J(t)^T\, dt, \qquad P_i = \sum_{k=0}^{\nu-1} F_N(k)\, B B^T F_N(k)^T, \tag{8}$$

respectively. They span the reachable subspaces of the differential and algebraic states. Analogously, the proper and improper observability Gramians

$$Q_p = \int_0^\infty F_J(t)^T C^T C\, F_J(t)\, dt, \qquad Q_i = \sum_{k=0}^{\nu-1} F_N(k)^T C^T C\, F_N(k)$$

span the corresponding observability spaces. The proper Gramians are obtained by solving the projected continuous-time Lyapunov equations

$$E P_p A^T + A P_p E^T = -\Pi_l B B^T \Pi_l^T, \qquad P_p = \Pi_r P_p \Pi_r^T, \tag{9}$$
$$E^T Q_p A + A^T Q_p E = -\Pi_r^T C^T C\, \Pi_r, \qquad Q_p = \Pi_l^T Q_p \Pi_l, \tag{10}$$

and the improper ones by solving the discrete-time Lyapunov equations

$$A P_i A^T - E P_i E^T = (I_N - \Pi_l) B B^T (I_N - \Pi_l)^T, \qquad 0 = \Pi_r P_i \Pi_r^T, \tag{11}$$
$$A^T Q_i A - E^T Q_i E = (I_N - \Pi_r)^T C^T C\, (I_N - \Pi_r), \qquad 0 = \Pi_l^T Q_i \Pi_l. \tag{12}$$

In Subsection 3.2.2 we will describe how these projected Lyapunov equations can be solved.

In practice, the projectors Π_l and Π_r are computationally expensive to form and even more expensive to compute with, as they are, in general, dense matrices of large dimension. To determine them, the derivations from (Kunkel and Mehrmann Citation2006) can be applied; however, they are not numerically stable since a Jordan canonical form needs to be computed. On the other hand, the methods presented in (Banagaaya Citation2014; Campbell, Meyer, and Rose Citation1976) or (März Citation1996) might be used to generate the differential and algebraic state spaces; however, they involve the determination of matrix kernels based on matrix decompositions. Suitable parts of the matrices involved in these decompositions are then used to determine the projection matrices Π_l and Π_r. They are in general not of sparse structure and, therefore, not applicable if we consider matrices of large dimension. Hence, if possible, it is advantageous to determine the projection matrices analytically. In many application fields, this is possible because the matrices E and A have a special structure that can be utilized for this purpose. The examples in Section 2 have such a numerically advantageous structure. Further examples can be found in (März Citation1996; Stykel Citation2008).

3.2. Model reduction of differential-algebraic systems

The aim of this section is to review methods to reduce the DAE system (Equation1) for a fixed parameter. We utilize balanced truncation modified for projected systems which is presented in Subsection 3.2.1. Afterwards, in Subsection 3.2.2, we summarize the ADI and Smith methods to solve the involved projected Lyapunov equations. Finally, in Subsection 3.2.3, we present an alternative approach to approximate the algebraic parts of the system (potentially leading to polynomial parts in the transfer function) which is inspired by (Schwerdtner et al. Citation2023).

3.2.1. Balanced truncation

The aim of this subsection is to find a reduced system

$$\frac{d}{dt}\, E_R z_R(t) = A_R z_R(t) + B_R u(t), \qquad y_R(t) = C_R z_R(t), \tag{13}$$

with E_R, A_R ∈ R^{r,r}, B_R ∈ R^{r,m}, C_R ∈ R^{p,r}, and r ≪ N, which approximates the input/output behaviour of the original system (1) (for a fixed parameter).

Since the different Gramians are positive semidefinite, there exist factorizations

$$P_p = R_p R_p^T, \qquad P_i = R_i R_i^T, \qquad Q_p = S_p S_p^T, \qquad Q_i = S_i S_i^T$$

with factors R_p, R_i, S_p, S_i ∈ R^{N,N}. We consider the singular value decompositions

$$S_p^T E R_p = U_p \Sigma_p V_p^T = \begin{bmatrix} U_{p,1} & U_{p,2}\end{bmatrix}\begin{bmatrix} \Sigma_{p,1} & 0\\ 0 & \Sigma_{p,2}\end{bmatrix}\begin{bmatrix} V_{p,1}^T\\ V_{p,2}^T\end{bmatrix},$$
$$S_i^T A R_i = U_i \Sigma_i V_i^T = \begin{bmatrix} U_{i,1} & U_{i,2}\end{bmatrix}\begin{bmatrix} \Sigma_{i,1} & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix} V_{i,1}^T\\ V_{i,2}^T\end{bmatrix}, \tag{14}$$

where Σ_p = diag(σ_1, …, σ_N) is a diagonal matrix with nonincreasing nonnegative entries, the proper Hankel singular values, and Σ_{i,1} is a diagonal matrix containing the nonzero improper Hankel singular values. With the matrix Σ_{p,1}, which contains the r_p largest proper Hankel singular values, and Σ_{i,1} ∈ R^{r_i,r_i}, we construct the left and right projection matrices

$$W_R := \begin{bmatrix} S_p U_{p,1}\Sigma_{p,1}^{-1/2} & S_i U_{i,1}\Sigma_{i,1}^{-1/2}\end{bmatrix} \qquad\text{and}\qquad T_R := \begin{bmatrix} R_p V_{p,1}\Sigma_{p,1}^{-1/2} & R_i V_{i,1}\Sigma_{i,1}^{-1/2}\end{bmatrix}$$

that balance the system, reduce the differential subsystem, and generate a minimal realization of the algebraic subsystem.

We obtain the reduced system (13) by setting

$$E_R := W_R^T E T_R = \begin{bmatrix} I_{r_p} & 0\\ 0 & N_R\end{bmatrix},\quad A_R := W_R^T A T_R = \begin{bmatrix} J_R & 0\\ 0 & I_{r_i}\end{bmatrix},\quad B_R := \begin{bmatrix} B_{R,1}\\ B_{R,2}\end{bmatrix} := W_R^T B,\quad C_R := \begin{bmatrix} C_{R,1} & C_{R,2}\end{bmatrix} := C T_R. \tag{15}$$

The generated reduced system has decoupled differential and algebraic states.
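The following sketch mirrors the proper (differential) part of the construction (14)–(15) when low-rank factors of the proper Gramians are available; the improper part is omitted and the variable names are our own:

```python
import numpy as np

def proper_truncation(E, A, B, C, Zp, Zq, rp):
    """Sketch of the proper part of the truncation step (15): given
    low-rank factors Zp, Zq with P_p ~ Zp Zp^H and Q_p ~ Zq Zq^H,
    build W_R, T_R and the reduced matrices.  Illustrative only."""
    U, sig, Vt = np.linalg.svd(Zq.conj().T @ E @ Zp, full_matrices=False)
    U1, V1, s1 = U[:, :rp], Vt[:rp, :].conj().T, sig[:rp]
    WR = Zq @ U1 / np.sqrt(s1)      # left projection matrix
    TR = Zp @ V1 / np.sqrt(s1)      # right projection matrix
    return WR.conj().T @ E @ TR, WR.conj().T @ A @ TR, WR.conj().T @ B, C @ TR, sig
```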

If σ_{r_p} > σ_{r_p+1}, then the ROM is asymptotically stable and one can estimate the output error of the reduced system by

$$\|y - y_R\|_{L_2([0,\infty),\mathbb{R}^p)} \le \|G - G_R\|_{\mathcal{H}_\infty}\, \|u\|_{L_2([0,\infty),\mathbb{R}^m)} \le 2\sum_{j=r_p+1}^{n}\sigma_j\, \|u\|_{L_2([0,\infty),\mathbb{R}^m)}, \tag{16}$$

where G(s) := C(sE − A)^{-1}B and G_R(s) := C_R(sE_R − A_R)^{-1}B_R are the transfer functions of the original and the reduced system, and G − G_R ∈ RH_∞^{p,m} [Antoulas Citation2005, Theorem 7.9]. The H_∞-norm is defined as ‖G‖_{H_∞} := sup_{ω∈R} σ_max(G(iω)), where σ_max(·) denotes the maximum singular value of its matrix argument.

Remark 1. In the balancing procedure outlined above, it is important that the number of inputs and outputs is very small compared to the state-space dimension, i.e., m, p ≪ N, which we assume throughout this work. In this case, the Gramians will typically have low numerical rank (Sorenson and Zhou Citation2002), which ensures that a low-rank approximation of the Gramians is possible and numerically efficient and that the FOM can be well-approximated by a ROM of low order. Balanced truncation model reduction is also possible in the case that one of the dimensions m or p is large while the other one is small, see, e.g., (Benner and Schneider Citation2013; Pulch, Narayan, and Stykel Citation2021). However, in this case, the numerical procedure is based on the approximation of an integral by quadrature. Our method cannot be generalized to this more general situation in a straightforward manner, but we believe that it would be an interesting topic for future research.

3.2.2. Solving projected Lyapunov equations

The aim of this subsection is to present numerical techniques to solve the projected Lyapunov equations (10) and (12) in order to approximate the Gramians of system (1). For standard systems with E = I_N, there are several methods to solve the corresponding Lyapunov equations. If small-scale matrices are considered, the Bartels-Stewart algorithm (Bartels and Stewart Citation1972) or Hammarling's method (Hammarling Citation1982) can be used. These methods, however, are inefficient in the case of large matrices. In typical applications of practical relevance, the solution of a Lyapunov equation is often of very low numerical rank, so it is desirable to compute low-rank factors of these solutions directly. State-of-the-art methods are the ADI method (Kürschner Citation2016; Penzl Citation2000), the sign function method (Benner et al. Citation2017) or Krylov subspace methods (Simoncini and Druskin Citation2009). In the literature, there are also several extensions to projected Lyapunov equations such as (Heinkenschloss, Sorensen, and Sun Citation2008; Stykel Citation2008).

In this section, we utilize the ADI method to solve the projected continuous-time Lyapunov equation (10) and the generalized Smith method to solve the discrete-time Lyapunov equation (12). Here, we follow the ideas from (Stykel Citation2008).

We can use the following lemma from (Stykel Citation2008) to derive an equation that is equivalent to the projected continuous-time Lyapunov equation (9).

Lemma 3.1. Let the matrix pencil sE − A with E, A ∈ R^{N,N} be regular and asymptotically stable. Let further the matrix A be nonsingular and B ∈ R^{N,m}. Assume that the left and right spectral projectors onto the finite spectrum of sE − A are denoted by Π_l, Π_r ∈ R^{N,N}. If p ∈ C with Re(p) < 0 is not an eigenvalue of the pencil sA − E, then the projected discrete-time Lyapunov equation

$$P_p = S(p)R(p)\, P_p\, R(p)^H S(p)^H - 2\,\mathrm{Re}(p)\, S(p)\Pi_l B B^T \Pi_l^T S(p)^H, \qquad P_p = \Pi_r P_p \Pi_r^T \tag{17}$$

with

$$S(p) := (E + pA)^{-1} \qquad\text{and}\qquad R(p) := E - \bar{p}A \tag{18}$$

is equivalent to the projected continuous-time Lyapunov Equation (Equation9), i.e. their solution sets coincide.

Lemma 3.1 motivates the ADI iteration

$$P_0 := 0, \qquad P_k := S(p_k)R(p_k)\, P_{k-1}\, R(p_k)^H S(p_k)^H - 2\,\mathrm{Re}(p_k)\, S(p_k)\Pi_l B B^T \Pi_l^T S(p_k)^H, \qquad k = 1, 2, \ldots. \tag{19}$$

As shown in (Stykel Citation2008) the following proposition provides the convergence of the iteration (19).

Proposition 3.2. Let the matrix pencil sE − A with E, A ∈ R^{N,N} be regular and asymptotically stable (i.e., all its finite eigenvalues have negative real part) and let B ∈ R^{N,m}. Assume that the left and right spectral projectors onto the finite spectrum of sE − A are denoted by Π_l, Π_r ∈ R^{N,N}. Suppose that we are given a sequence of shift parameters (p_k)_{k≥0} with Re(p_k) < 0 for all k ≥ 0 and with p_{k+ℓ} = p_k for some ℓ ≥ 1 and all k = 0, 1, 2, …. Then the iteration (19) converges to the solution P_p of the projected Lyapunov equation (9).

Remark 2. The periodicity of the shift parameters ensures that they fulfill a non-Blaschke condition that is sufficient for the convergence of the ADI iteration (Massoudi, Opmeer, and Reis Citation2017).

To work with the ADI iteration more efficiently, our aim is to compute low-rank factors of P_p, i.e., we aim to compute a tall and skinny matrix Z such that P_p ≈ Z Z^H. We can represent the iteration (19) by the low-rank factors of P_k = Z_k Z_k^H with

$$Z_k = \begin{bmatrix} \kappa(p_k)S(p_k)\Pi_l B & S(p_k)R(p_k)Z_{k-1}\end{bmatrix} = \begin{bmatrix} \kappa(p_k)S(p_k)\Pi_l B & \kappa(p_{k-1})S(p_k)R(p_k)S(p_{k-1})\Pi_l B & \cdots & \kappa(p_1)S(p_k)R(p_k)\cdots S(p_2)R(p_2)S(p_1)\Pi_l B\end{bmatrix},$$

where κ(p_k) = \sqrt{-2\,\mathrm{Re}(p_k)} and Z_0 is chosen as the empty matrix in R^{N,0}. We note that the following properties hold for all j, k = 0, 1, 2, …:

$$S(p_k)AS(p_j) = S(p_j)AS(p_k), \qquad R(p_k)A^{-1}R(p_j) = R(p_j)A^{-1}R(p_k), \qquad S(p_k)R(p_j) = A^{-1}R(p_j)S(p_k)A. \tag{20}$$

We further define

$$B_0 := \kappa(p_k)S(p_k)\Pi_l B \qquad\text{and}\qquad F_j := \frac{\kappa(p_j)}{\kappa(p_{j+1})}\, S(p_j)R(p_{j+1}), \quad j = 1, \ldots, k-1.$$

Using (20), we obtain

$$Z_k = \begin{bmatrix} B_0 & F_{k-1}B_0 & \cdots & F_1\cdots F_{k-1}B_0\end{bmatrix}.$$
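A minimal dense version of this low-rank ADI iteration (19) might look as follows; shift selection, residual-based stopping, and the sparse factorizations used in practice are all omitted, and the function name is our own:

```python
import numpy as np

def lr_adi_projected(E, A, B, Pi_l, shifts, steps):
    """Minimal dense sketch of the low-rank ADI iteration (19) for the
    projected Lyapunov equation (9).  `shifts` is a list of complex
    numbers with negative real part, reused periodically."""
    N = A.shape[0]
    Z = np.zeros((N, 0), dtype=complex)
    PB = Pi_l @ B                                   # projected right-hand side
    for k in range(steps):
        p = shifts[k % len(shifts)]                 # periodic reuse of shifts
        V = np.linalg.solve(E + p * A, PB)          # S(p_k) Pi_l B
        W = np.linalg.solve(E + p * A, (E - np.conj(p) * A) @ Z) if Z.size else Z
        Z = np.hstack([np.sqrt(-2.0 * p.real) * V, W])   # new factor Z_k
    return Z                                        # P_p ~ Z Z^H
```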

It remains to solve the discrete-time Lyapunov equation (11). Under the assumption that A is nonsingular, (11) is equivalent to the transformed discrete-time Lyapunov equation

$$P_i - A^{-1}E\, P_i\, E^T A^{-T} = A^{-1}(I_N - \Pi_l) B B^T (I_N - \Pi_l)^T A^{-T}, \qquad 0 = \Pi_r P_i \Pi_r^T.$$

This equation can be solved with the Smith method (Stykel Citation2008). Since A^{-1}(I_N − Π_l) = (I_N − Π_r)A^{-1} and the matrix (I_N − Π_r)A^{-1}E = A^{-1}E(I_N − Π_r) is nilpotent with the nilpotency index ν of the matrix N of the quasi-Weierstraß form of the pencil sE − A, the iteration terminates after ν steps. This leads to the unique solution

$$P_i = \sum_{k=0}^{\nu-1}(A^{-1}E)^k (I_N - \Pi_r)A^{-1}B B^T A^{-T}(I_N - \Pi_r)^T \big((A^{-1}E)^T\big)^k.$$

We note that we have to solve multiple linear systems with the matrix A to obtain the solution P_i. Therefore, we can utilize a sparse LU decomposition to do this efficiently. Instead of computing the full matrix P_i we can also generate the low-rank factor Y with P_i = Y Y^T as

$$Y = \begin{bmatrix} (I - \Pi_r)A^{-1}B & A^{-1}E(I - \Pi_r)A^{-1}B & \cdots & (A^{-1}E)^{\nu-1}(I - \Pi_r)A^{-1}B\end{bmatrix}.$$

Remark 3. In many relevant applications, the solution P_i can be explicitly calculated by making use of specific block structures in sE − A. Moreover, the improper Hankel singular values are often zero (namely, when the FOM's transfer function is strictly proper), in which case we do not have to solve the improper Lyapunov equations at all, see also (Stykel Citation2006).

3.2.3. Alternative approach to determine the algebraic parts

Since the reduced basis method presented later in this paper is applied only to the differential part of the system, in this section we discuss ways to construct reduced representations of the algebraic part. One way is to use the reduction provided by the improper Gramians as explained in Subsection 3.2.1. This is a suitable choice for parameter-independent problems. However, it is difficult to build a reduced representation for parameter-dependent problems, if the improper part depends on parameters. Then, the discrete-time Lyapunov equations have to be solved for each parameter value of interest in the online phase which can be too expensive. For this reason, in this subsection, we present an alternative approach that is also viable in the parametric setting and allows an efficient evaluation in the online phase (see also Subsection 4.3).

Our method is inspired by (Schwerdtner et al. Citation2023), in which the polynomial part of the transfer functions of DAE systems of index at most two is approximated; the methodology can also be extended to systems of even higher index. Consider the transfer function G(s) of system (1) (for a fixed parameter), which can be decomposed into its strictly proper part and a polynomial part

$$G(s) = G_{\mathrm{sp}}(s) + G_{\mathrm{poly}}(s),$$

where the polynomial part is given as

$$G_{\mathrm{poly}}(s) = \sum_{k=0}^{\nu-1} C\, T^{-1}\begin{bmatrix} 0 & 0\\ 0 & N^k\end{bmatrix}W^{-1}B\, s^k,$$

with T, W, and N with nilpotency index ν as in (7); see (Stykel Citation2004) for details. Hence, for W^{-1}B =: [B_1; B_2] and CT^{-1} =: [C_1, C_2] partitioned conforming to the partitioning in (7), the polynomial part can be rewritten as

$$G_{\mathrm{poly}}(s) = \sum_{k=0}^{\nu-1} C_2 N^k B_2\, s^k,$$

where M_k = C_2 N^k B_2, k = 0, …, ν−1, denotes the k-th Markov parameter of the transfer function. As shown in (Antoulas, Gosea, and Heinkenschloss Citation2020), the strictly proper part G_sp(iω) converges to zero as ω → ∞. For this reason, we can approximate the Markov parameters by considering sufficiently large values of ω ∈ R. We exemplify the methodology for a few cases.

The index one case In this case, we simply have

$$M_0 \approx G(i\omega)$$

for a sufficiently large value of ω ∈ R; moreover, M_k = 0 for k ≥ 1.

The index two case In this case, M_k = 0 for k ≥ 2 and hence G_poly(s) = M_0 + M_1 s. By inserting iω, ω ∈ R, we obtain that

$$\mathrm{Re}\, G_{\mathrm{poly}}(i\omega) = M_0, \qquad \mathrm{Im}\, G_{\mathrm{poly}}(i\omega) = \omega M_1.$$

Hence, for sufficiently large values ω ∈ R we can approximate

$$M_0 \approx \mathrm{Re}\, G(i\omega) \qquad\text{and}\qquad M_1 \approx \frac{\mathrm{Im}\, G(i\omega)}{\omega}.$$

The index three case In the index three case, we approximate a polynomial part of the form G_poly(s) = M_0 + M_1 s + M_2 s^2. Inserting iω, ω ∈ R, into the polynomial part of the transfer function G_poly provides a real and an imaginary part

$$\mathrm{Re}\, G_{\mathrm{poly}}(i\omega) = M_0 - \omega^2 M_2, \qquad \mathrm{Im}\, G_{\mathrm{poly}}(i\omega) = \omega M_1.$$

Then the Markov parameters are approximated by

$$M_2 \approx \frac{\mathrm{Re}\big(G(i\omega_1) - G(i\omega_2)\big)}{\omega_2^2 - \omega_1^2}, \qquad M_1 \approx \frac{\mathrm{Im}\, G(i\omega)}{\omega}, \qquad M_0 \approx \mathrm{Re}\, G(i\omega) + \omega^2 M_2$$

for sufficiently large ω, ω_1, and ω_2 with ω_1 ≠ ω_2.
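As an illustration of the index-two case above, the Markov parameters can be estimated from a single transfer-function sample at a large frequency. The helper below is a sketch under the assumption that G is available as a callable returning the matrix G(iω); its name is our own.

```python
import numpy as np

def markov_from_samples(G, omega):
    """Approximate M0, M1 of an (at most) index-two DAE from one
    transfer-function sample at a large frequency omega, following the
    real/imaginary-part split described above."""
    Gw = G(1j * omega)
    M0 = Gw.real            # M0 ~ Re G(i*omega)
    M1 = Gw.imag / omega    # M1 ~ Im G(i*omega) / omega
    return M0, M1

# Usage sketch with a generic descriptor realization (E, A, B, C):
# G = lambda s: C @ np.linalg.solve(s * E - A, B)
# M0, M1 = markov_from_samples(G, omega=1e8)
```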

System realization from Markov parameters If the polynomial part G_poly(s) = ∑_{k=0}^{ν−1} M_k s^k has been extracted and all Markov parameters M_k, k = 0, …, ν−1, have been determined, we can realize it by a DAE system as follows. Assume that M_k = U_k Σ_k V_k^T are compact SVDs of M_k with positive definite Σ_k ∈ R^{r_k,r_k} for all k = 1, …, ν−1. Then, for k = 1, …, ν−1, we define the matrices

$$\hat{B}_k := \begin{bmatrix} 0\\ \vdots\\ 0\\ V_k^T\end{bmatrix} \in \mathbb{R}^{k r_k, m}, \qquad \hat{C}_k := \begin{bmatrix} U_k & 0 & \cdots & 0\end{bmatrix} \in \mathbb{R}^{p, k r_k}$$

and

$$\hat{E}_k := \begin{bmatrix} 0 & \Sigma_k^{1/k} & & \\ & \ddots & \ddots & \\ & & \ddots & \Sigma_k^{1/k}\\ & & & 0\end{bmatrix} \in \mathbb{R}^{k r_k, k r_k}, \qquad\text{so that}\qquad \hat{E}_k^{\,k} = \begin{bmatrix} 0 & \cdots & 0 & \Sigma_k\\ & & & 0\\ & & & \vdots\\ 0 & \cdots & & 0\end{bmatrix}.$$

This yields

$$\hat{C}_k\big(s\hat{E}_k - I_{k r_k}\big)^{-1}\hat{B}_k = \sum_{j=0}^{k}\hat{C}_k\hat{E}_k^{\,j}\hat{B}_k\, s^j = \hat{C}_k\hat{E}_k^{\,k}\hat{B}_k\, s^k = M_k s^k.$$

Now we obtain an overall realization of the algebraic part of the system as

$$\frac{d}{dt}\begin{bmatrix} \hat{E}_1 & & \\ & \ddots & \\ & & \hat{E}_{\nu-1}\end{bmatrix}\begin{bmatrix} z_1(t)\\ \vdots\\ z_{\nu-1}(t)\end{bmatrix} = \begin{bmatrix} z_1(t)\\ \vdots\\ z_{\nu-1}(t)\end{bmatrix} + \begin{bmatrix} \hat{B}_1\\ \vdots\\ \hat{B}_{\nu-1}\end{bmatrix}u(t),$$
$$\hat{y}(t) = \begin{bmatrix} \hat{C}_1 & \cdots & \hat{C}_{\nu-1}\end{bmatrix}\begin{bmatrix} z_1(t)\\ \vdots\\ z_{\nu-1}(t)\end{bmatrix} + M_0 u(t).$$
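For the index-two case, one possible realization of the extracted polynomial part can be assembled directly from an SVD of M_1. The following sketch is our own construction (it follows the same idea as above but is not necessarily the exact realization used in the paper) and verifies itself numerically:

```python
import numpy as np

def realize_index2_poly(M0, M1, tol=1e-12):
    """One possible descriptor realization of G_poly(s) = M0 + M1*s for an
    index-two system; a sketch, not necessarily the paper's construction."""
    U, s, Vt = np.linalg.svd(M1)
    r = int(np.sum(s > tol))                       # numerical rank of M1
    U1, S1, V1t = U[:, :r], np.diag(np.sqrt(s[:r])), Vt[:r, :]
    E = np.block([[np.zeros((r, r)), np.eye(r)],
                  [np.zeros((r, r)), np.zeros((r, r))]])   # nilpotent E
    A = np.eye(2 * r)
    B = np.vstack([np.zeros((r, M1.shape[1])), S1 @ V1t])
    C = np.hstack([-U1 @ S1, np.zeros((M1.shape[0], r))])
    return E, A, B, C, M0                          # E z' = A z + B u, y = C z + M0 u

# Check: C (sE - A)^{-1} B + M0 should equal M0 + M1*s for any s.
M0, M1 = np.array([[1.0]]), np.array([[2.0]])
E, A, B, C, D = realize_index2_poly(M0, M1)
s = 3.7
print(C @ np.linalg.solve(s * E - A, B) + D)       # ~ M0 + M1*s = 8.4
```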

4. Reduced basis method

In this section, we return to the problem of reducing parameter-dependent systems. Since the solution of the Lyapunov equations is the most expensive part of balanced truncation, we aim to limit the number of Lyapunov equation solves. To achieve this, the reduced basis method presented in (Son and Stykel Citation2017) can be applied. Besides the regularity and asymptotic stability of the matrix pencil sE(μ) − A(μ) for all μ ∈ D, we impose the following assumption on the problem (1) for the rest of the paper.

Assumption 4.1. We assume that D ⊂ R^d is a bounded and connected domain. We assume further that the matrix-valued functions E(·), A(·), B(·), C(·) in (1) as well as the left and right spectral projector functions Π_l(·) and Π_r(·) onto the deflating subspace associated with the finite eigenvalues of sE(·) − A(·) depend continuously on the parameter μ.

Assumption 4.1 ensures that the solutions of the parameter-dependent projected Lyapunov equations also depend continuously on μ ∈ D. Then, also the ROM depends continuously on μ for a fixed reduced order.

In practice, many examples are continuously parameter-dependent, e.g., in electrical circuits, the resistances can be continuously dependent on the temperature. Moreover, inductances and capacitances may be assumed continuous if one wishes to tune them within a parameter optimization. In other examples, for instance, mechanical systems, the parameters may represent (continuously varying) material constants. On the other hand, there are also examples where the dependence on the parameters is not continuous. This is, for example, the case for electrical circuits in which the network topology changes by switching. These discontinuities are often known since, in practice, the user has insight into the considered problem. In this case, the parameter domain could be decomposed into several subdomains, on which the reduction can be performed separately.

In the following, we will focus on the reduction of the differential part of the DAE system (1). The main idea consists of finding a reduced representation of the solutions of the Lyapunov equations (9) of the form

$$P_p(\mu) \approx Z(\mu)Z(\mu)^H \qquad\text{with}\qquad Z(\mu) = V(\mu)\tilde{Z}(\mu) \quad\text{for all } \mu \in D, \tag{21}$$

where V(μ) ∈ R^{N,r_V} has orthonormal columns with Π_r(μ)V(μ) = V(μ) and where Π_r(μ) denotes the projector onto the right deflating subspace of sE(μ) − A(μ) for the parameter value μ. Under the assumption that V(μ)^T E(μ)V(μ) is invertible and the pencil sV(μ)^T E(μ)V(μ) − V(μ)^T A(μ)V(μ) is regular and asymptotically stable (this condition can be guaranteed a priori by the strict dissipativity assumptions in Subsection 4.2), the low-rank factor Z̃(μ) solves the reduced Lyapunov equation

$$V(\mu)^T E(\mu)V(\mu)\, \tilde{Z}(\mu)\tilde{Z}(\mu)^H\, V(\mu)^T A(\mu)^T V(\mu) + V(\mu)^T A(\mu)V(\mu)\, \tilde{Z}(\mu)\tilde{Z}(\mu)^H\, V(\mu)^T E(\mu)^T V(\mu) = -V(\mu)^T \Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T V(\mu). \tag{22}$$

The equation Πr(μ)V(μ)=V(μ) replaces the condition

$$\Pi_r(\mu)\big(V(\mu)\tilde{Z}(\mu)\big)\big(V(\mu)\tilde{Z}(\mu)\big)^H\Pi_r(\mu)^T = \big(V(\mu)\tilde{Z}(\mu)\big)\big(V(\mu)\tilde{Z}(\mu)\big)^H.$$

Algorithm 1 Reduced basis method for projected Lyapunov equations (offline phase)

Input: Dynamical system (1), test parameter set D_Test, tolerance Tol.

Output: Reduced basis matrix V_glob

1: Choose any μ_1 ∈ D_Test.

2: Solve the projected Lyapunov equation (24) for μ = μ_1 to obtain Z(μ_1).

3: Set M := {μ_1}.

4: Set V_glob := orth(Z(μ_1)).

5: Set μ̂ := argmax_{μ ∈ D_Test∖M} Δ_V(μ).

6: Set Δ_V,max := Δ_V(μ̂).

7: while Δ_V,max > Tol do

8: Solve the projected Lyapunov equation (24) for μ = μ̂ to obtain Z(μ̂).

9: Set M := M ∪ {μ̂}.

10: Set V_glob := orth([V_glob, Z(μ̂)]).

11: Set μ̂ := argmax_{μ ∈ D_Test∖M} Δ_V(μ).

12: Set Δ_V,max := Δ_V(μ̂).

13: end while

In practice, we aim to determine a matrix V_glob ∈ R^{N,r_glob} with r_glob ≪ N that contains the information of V(μ) for all μ ∈ D globally. When we have determined such a V_glob, we set

$$V(\mu) = \mathrm{orth}\big(\Pi_r(\mu)V_{\mathrm{glob}}\big), \tag{23}$$

where orth(Π_r(μ)V_glob) denotes the orthonormalized columns of Π_r(μ)V_glob. If the matrix V_glob has been found, then the ROM for a particular parameter μ ∈ D can be determined very efficiently in the so-called online phase, where one simply solves (22) with (21) and (23) to determine the two Gramians and then projects the system as in Subsection 3.2.1.

The computation of the reduced basis V_glob is done beforehand in the offline phase. There, we solve the full-order Lyapunov equation only for those parameters whose solutions are currently approximated worst, in order to enrich the current reduced basis. A posteriori error estimators are employed on a test parameter set to find these parameters efficiently.

We determine V_glob by considering a test parameter set D_Test ⊂ D. We begin by computing a low-rank factor Z(μ_1) for some μ_1 ∈ D_Test such that P_p(μ_1) = Z(μ_1)Z(μ_1)^H solves the projected Lyapunov equation

$$E(\mu)P_p(\mu)A(\mu)^T + A(\mu)P_p(\mu)E(\mu)^T = -\Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T, \qquad \Pi_r(\mu)P_p(\mu)\Pi_r(\mu)^T = P_p(\mu) \tag{24}$$

for μ=μ1.

The first reduced basis is then given by V_glob := orth(Z(μ_1)). Next, we determine the test parameter μ_2 ∈ D_Test for which the solution P_p(μ_2) of the Lyapunov equation (24) is approximated worst when using (22), (23), and (24). For that, one of the error estimators Δ_V(μ) presented in Subsection 4.1 is utilized. For this parameter μ_2, we solve the projected Lyapunov equation (24) with μ = μ_2 to obtain the low-rank factor Z(μ_2). Then, we define the new reduced basis by setting V_glob := orth([V_glob, Z(μ_2)]). This procedure is repeated until the error estimator Δ_V is smaller than a prescribed tolerance for every test parameter in D_Test. This method is summarized in Algorithm 1.
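A compact sketch of this greedy offline loop, with the full Lyapunov solver and the error estimator Δ_V passed in as callbacks (both placeholders for the ingredients described in this paper), reads as follows:

```python
import numpy as np

def rb_offline(solve_full, error_estimator, test_params, tol, max_iter=50):
    """Sketch of the greedy offline phase in Algorithm 1.  `solve_full(mu)`
    returns a low-rank factor Z(mu) of the projected Lyapunov equation (24)
    and `error_estimator(V_glob, mu)` plays the role of Delta_V(mu)."""
    V_glob = np.linalg.qr(solve_full(test_params[0]))[0]   # orth(Z(mu_1))
    selected = {0}
    for _ in range(max_iter):
        # estimate the error for all not-yet-selected test parameters
        errors = [(-np.inf if i in selected else error_estimator(V_glob, m))
                  for i, m in enumerate(test_params)]
        i_max = int(np.argmax(errors))
        if errors[i_max] <= tol:
            break
        Z = solve_full(test_params[i_max])                 # full solve at worst point
        V_glob = np.linalg.qr(np.hstack([V_glob, Z]))[0]   # basis enrichment
        selected.add(i_max)
    return V_glob
```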

Remark 4. Under the assumption that the projected Lyapunov equations (9), (10) are solved exactly for all parameters μ ∈ D using the reduced Lyapunov equations of the form (22), the H_∞ error bound (16) is also valid in the entire parameter domain. Unfortunately, this bound is no longer valid if numerical approximation errors affect the numerically computed Gramians, which already happens in the parameter-independent case. A general and in-depth analysis of the effect of such errors on the Hankel singular values is still an open problem. On the other hand, the numerically computed Hankel singular values can still be used to obtain an approximate error bound.

4.1. Error estimation

We want to estimate the error 𝓔(μ) := P_p(μ) − Z(μ)Z(μ)^H with the help of the residual

$$\mathcal{R}(\mu) := A(\mu)Z(\mu)Z(\mu)^H E(\mu)^T + E(\mu)Z(\mu)Z(\mu)^H A(\mu)^T + \Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T.$$

Lemma 4.2. Let μ ∈ D be given. Let the matrix pencil sE(μ) − A(μ) with E(μ), A(μ) ∈ R^{N,N} be regular and asymptotically stable. Assume that the left and right spectral projectors onto the finite spectrum of sE(μ) − A(μ) are denoted by Π_l(μ), Π_r(μ) ∈ R^{N,N}. Let the controllability Gramian P_p(μ) ∈ R^{N,N} be defined as in (8) and let it be approximated as described in Equation (21). Then the error 𝓔(μ) := P_p(μ) − Z(μ)Z(μ)^H solves the projected error equation

$$A(\mu)\mathcal{E}(\mu)E(\mu)^T + E(\mu)\mathcal{E}(\mu)A(\mu)^T = -\mathcal{R}(\mu) = -\Pi_l(\mu)\mathcal{R}(\mu)\Pi_l(\mu)^T, \qquad \mathcal{E}(\mu) = \Pi_r(\mu)\mathcal{E}(\mu)\Pi_r(\mu)^T. \tag{25}$$

Proof. The condition

$$\mathcal{E}(\mu) = \Pi_r(\mu)\mathcal{E}(\mu)\Pi_r(\mu)^T$$

follows from P_p(μ) = Π_r(μ)P_p(μ)Π_r(μ)^T and Z(μ) = Π_r(μ)Z(μ). For the remaining equation, we consider its left-hand side

$$\begin{aligned}A(\mu)\mathcal{E}(\mu)E(\mu)^T + E(\mu)\mathcal{E}(\mu)A(\mu)^T &= A(\mu)P_p(\mu)E(\mu)^T + E(\mu)P_p(\mu)A(\mu)^T - A(\mu)Z(\mu)Z(\mu)^H E(\mu)^T - E(\mu)Z(\mu)Z(\mu)^H A(\mu)^T\\ &= -\Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T - A(\mu)Z(\mu)Z(\mu)^H E(\mu)^T - E(\mu)Z(\mu)Z(\mu)^H A(\mu)^T.\end{aligned}$$

With Π_l(μ)A(μ) = A(μ)Π_r(μ), Π_l(μ)E(μ) = E(μ)Π_r(μ), and Z(μ) = Π_r(μ)Z(μ) we obtain

$$A(\mu)\mathcal{E}(\mu)E(\mu)^T + E(\mu)\mathcal{E}(\mu)A(\mu)^T = -\Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T - \Pi_l(\mu)A(\mu)Z(\mu)Z(\mu)^H E(\mu)^T\Pi_l(\mu)^T - \Pi_l(\mu)E(\mu)Z(\mu)Z(\mu)^H A(\mu)^T\Pi_l(\mu)^T,$$

which proves the statement.

Similarly to (Son and Stykel Citation2017), we consider the linear system

$$\mathcal{L}(\mu)x(\mu) = b(\mu), \qquad x(\mu) = \boldsymbol{\Pi}_r(\mu)x(\mu)$$

with

$$\mathcal{L}(\mu) := A(\mu)\otimes E(\mu) + E(\mu)\otimes A(\mu), \qquad x(\mu) := \mathrm{vec}\big(P_p(\mu)\big), \qquad b(\mu) := -\mathrm{vec}\big(\Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T\big), \qquad \boldsymbol{\Pi}_r(\mu) := \Pi_r(\mu)\otimes\Pi_r(\mu), \tag{26}$$

where the operator ⊗ denotes the Kronecker product and vec the vectorization operator that stacks the columns of a matrix on top of one another. This linear system is equivalent to the projected Lyapunov equation (24). We note that 𝓛(μ) is a singular matrix and that unique solvability is enforced by the additional restriction given by the projection with 𝚷_r(μ).

Using the linear system representation and the abbreviations e(μ) := vec(𝓔(μ)), r(μ) := vec(ℛ(μ)), we can rewrite the error Equation (25) as

$$\mathcal{L}(\mu)e(\mu) = -r(\mu), \qquad e(\mu) = \boldsymbol{\Pi}_r(\mu)e(\mu).$$

This equation is equivalent to

$$\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu)\, e(\mu) = -\boldsymbol{\Pi}_l(\mu)r(\mu) = -r(\mu), \qquad\text{where}\quad \boldsymbol{\Pi}_l(\mu) := \Pi_l(\mu)\otimes\Pi_l(\mu). \tag{27}$$

Due to the unique solvability of the latter, we obtain

$$e(\mu) = -\big(\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu)\big)^{\dagger}\, r(\mu),$$

where (·)^† denotes the Moore-Penrose inverse. Then, we can estimate

$$\|\mathcal{E}(\mu)\|_F = \|e(\mu)\|_2 \le \big\|\big(\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu)\big)^{\dagger}\big\|_2\, \|r(\mu)\|_2 = \frac{1}{\sigma_{\min}^{+}\big(\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu)\big)}\, \|\mathcal{R}(\mu)\|_F \le \frac{1}{\alpha(\mu)}\, \|\mathcal{R}(\mu)\|_F =: \Delta_V^{(1)}(\mu). \tag{28}$$

Here, σ_min^+(·) denotes the smallest positive singular value of its matrix argument and α(μ) is an easily computable lower bound of σ_min^+(𝚷_l(μ)𝓛(μ)𝚷_r(μ)). The bound Δ_V^(1)(μ) is the first possible error estimator; in practice it is rather conservative.

To improve the estimator (28), we consider the error bound presented in (Schmidt, Wittwar, and Haasdonk Citation2020). Assume that 𝓔̂(μ) ≈ 𝓔(μ) is an error estimate that is not necessarily an upper bound. An error bound Δ_V^(2)(μ) is derived by

$$\|\mathcal{E}(\mu)\|_F = \|\mathcal{E}(\mu) + \hat{\mathcal{E}}(\mu) - \hat{\mathcal{E}}(\mu)\|_F \le \|\hat{\mathcal{E}}(\mu)\|_F + \|\mathcal{E}(\mu) - \hat{\mathcal{E}}(\mu)\|_F \le \|\hat{\mathcal{E}}(\mu)\|_F + \frac{1}{\alpha(\mu)}\,\|\hat{\mathcal{R}}(\mu)\|_F =: \Delta_V^{(2)}(\mu), \tag{29}$$

where the residual ℛ̂(μ) is defined as

$$\hat{\mathcal{R}}(\mu) := A(\mu)\big(Z(\mu)Z(\mu)^H + \hat{\mathcal{E}}(\mu)\big)E(\mu)^T + E(\mu)\big(Z(\mu)Z(\mu)^H + \hat{\mathcal{E}}(\mu)\big)A(\mu)^T + \Pi_l(\mu)B(\mu)B(\mu)^T\Pi_l(\mu)^T.$$

It remains to obtain an error estimate 𝓔̂(μ) for all μ ∈ D_Test. To do so, we solve the error Equation (25) approximately by modifying the projected ADI iteration from Subsection 3.2.2 and the reduced basis method in Algorithm 1.

The right-hand side of the error Equation (25) can be written as the product −B_l(μ)B_r(μ)^H, where

$$B_l(\mu) := \Pi_l(\mu)\begin{bmatrix} A(\mu)Z(\mu) & E(\mu)Z(\mu) & B(\mu)\end{bmatrix}, \qquad B_r(\mu) := \Pi_l(\mu)\begin{bmatrix} E(\mu)Z(\mu) & A(\mu)Z(\mu) & B(\mu)\end{bmatrix}.$$

For simplicity of notation, we consider the modified ADI iteration for the parameter-independent case such that E(μ) ≡ E, B_l(μ) ≡ B_l, B_r(μ) ≡ B_r, and Π_l(μ) ≡ Π_l. Analogously to the derivation in Subsection 3.2.2 (and under the same conditions), it can be shown that the iteration

$$\mathcal{E}_k = Z_{l,k}(Z_{r,k})^H = S(p_k)R(p_k)\,\mathcal{E}_{k-1}\,R(p_k)^H S(p_k)^H - 2\,\mathrm{Re}(p_k)\,S(p_k)B_l B_r^T S(p_k)^H = S(p_k)R(p_k)\,Z_{l,k-1}(Z_{r,k-1})^H\,R(p_k)^H S(p_k)^H - 2\,\mathrm{Re}(p_k)\,S(p_k)B_l B_r^T S(p_k)^H$$

converges to the solution 𝓔 of (25), where S(p_k) and R(p_k) are defined as in (18) for shift parameters p_k ∈ C with Re(p_k) < 0. The factors Z_{l,k} and Z_{r,k} can be written as

$$Z_{l,k} = \begin{bmatrix} B_{l,0} & F_{k-1}B_{l,0} & \cdots & F_1\cdots F_{k-1}B_{l,0}\end{bmatrix}, \qquad Z_{r,k} = \begin{bmatrix} B_{r,0} & F_{k-1}B_{r,0} & \cdots & F_1\cdots F_{k-1}B_{r,0}\end{bmatrix},$$

where

$$B_{l,0} := \kappa(p_k)S(p_k)\Pi_l B_l, \qquad B_{r,0} := \kappa(p_k)S(p_k)\Pi_l B_r \qquad\text{and}\qquad F_j := \frac{\kappa(p_j)}{\kappa(p_{j+1})}\,S(p_j)R(p_{j+1})$$

for j = 1, …, k−1 and with κ(p_j) = \sqrt{-2\,\mathrm{Re}(p_j)}.

We include the modified projected ADI iteration into the reduced basis method presented in Algorithm 1 by also solving the error Equation (25) in Steps 2 and 8 to obtain the two factors Z_l(μ), Z_r(μ) with Z_l(μ)Z_r(μ)^H ≈ 𝓔(μ). The orthonormal basis computation in Steps 4 and 10 is replaced by V_glob := orth([V_glob, Z_l(μ)]). Note that here, the columns of Z_l(μ) and Z_r(μ) span the same subspace. As error estimator, we use Δ_V^(1). After we have determined the orthonormal basis, we solve Equation (25) on the corresponding subspace to get an approximate error 𝓔̂(μ), which we use for the error estimator Δ_V^(2)(μ).

Remark 5. If the factor Z(μ) ∈ R^{N,n_Z} is already of large dimension n_Z, the right-hand side of the error Equation (25), described by B_l(μ) and B_r(μ), is of even larger rank 2n_Z + m, and hence, the solution 𝓔(μ) can in general not be well-represented by low-rank factors. This means that the error space basis might be large, which leads to a time-consuming error estimation.

4.2. Strictly dissipative systems

For the model examples from Section 2 we can make use of their structure to derive the bound α(μ) in a computationally advantageous way. To this end, we impose an additional assumption on the matrix pencils sE(μ) − A(μ) in (4) and (6) that are formed from submatrices of E(μ) and A(μ).

Definition 4.3. The set {sE(μ) − A(μ) ∈ R[s]^{n,n} | μ ∈ D} of matrix pencils is called uniformly strictly dissipative if

$$E(\mu) = E(\mu)^T > 0 \qquad\text{and}\qquad A_S(\mu) := \tfrac{1}{2}\big(A(\mu) + A(\mu)^T\big) < 0 \qquad\text{for all } \mu \in D.$$

We will consider this condition for our motivating examples from Section 2 and determine a corresponding lower bound α(μ).

4.2.1. Stokes-like systems

We consider the Stokes-like system (4). Here, the pencil set {sE(μ) − A(μ) | μ ∈ D} is naturally uniformly strictly dissipative.

As shown in (Stykel Citation2006) the projection matrices are given as

$$\Pi_l(\mu) = \Pi_r(\mu)^T = \begin{bmatrix} \Pi(\mu) & \Pi(\mu)A(\mu)E(\mu)^{-1}G(\mu)\big(G(\mu)^T E(\mu)^{-1}G(\mu)\big)^{-1}\\ 0 & 0\end{bmatrix}, \tag{30}$$

where

$$\Pi(\mu) = I_n - G(\mu)\big(G(\mu)^T E(\mu)^{-1}G(\mu)\big)^{-1}G(\mu)^T E(\mu)^{-1}. \tag{31}$$
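Since Π(μ) should never be formed as a dense matrix, one would apply it via sparse factorizations. The sketch below does this for a single fixed parameter, assuming the number of constraints q is moderate so that the dense q×q matrix G^T E^{-1} G can be formed and factorized; all names are our own.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_projector_apply(E, G):
    """Apply the projector Pi from (31), Pi = I - G (G^T E^{-1} G)^{-1} G^T E^{-1},
    to vectors without forming Pi explicitly.  A minimal sketch for one fixed
    parameter value."""
    E = sp.csc_matrix(E)
    G = np.asarray(G)
    solve_E = spla.factorized(E)                 # sparse LU of E, reused below
    EinvG = np.column_stack([solve_E(G[:, j]) for j in range(G.shape[1])])
    S = G.T @ EinvG                              # q-by-q matrix G^T E^{-1} G
    def apply_pi(x):
        return x - G @ np.linalg.solve(S, G.T @ solve_E(x))
    return apply_pi
```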

Even though for the Stokes-like system in (4) we have A_S(μ) = A(μ) < 0 for all μ ∈ D, in this section we already treat the more general case of a uniformly strictly dissipative set of matrix pencils sE(μ) − A(μ) with nonsymmetric A(μ), as this case will arise later in Subsection 4.2.2.

We continue by deriving a value for α(μ).

Theorem 4.4. Let μ ∈ D be given and consider the Stokes-like system (4). Let A_S(μ), 𝓛(μ), 𝚷_r(μ), 𝚷_l(μ) be defined as in Definition 4.3, (26), and (27), with Π_l(μ) = Π_r(μ)^T as in (30), respectively. Then the bound

$$\sigma_{\min}^{+}\big(\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu)\big) \ge 2\,\lambda_{\min}\big(E(\mu)\big)\,\lambda_{\min}\big(-A_S(\mu)\big)$$

is satisfied, where λ_min(·) denotes the minimum eigenvalue of its (symmetric) matrix argument.

Proof. Let Π(μ) be as in (31). Making use of the condition Π(μ)G(μ) = 0, we can deduce

$$\begin{aligned}\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu) &= \Pi_l(\mu)\begin{bmatrix} A(\mu) & G(\mu)\\ G(\mu)^T & 0\end{bmatrix}\Pi_r(\mu) \otimes \Pi_l(\mu)\begin{bmatrix} E(\mu) & 0\\ 0 & 0\end{bmatrix}\Pi_r(\mu) + \Pi_l(\mu)\begin{bmatrix} E(\mu) & 0\\ 0 & 0\end{bmatrix}\Pi_r(\mu) \otimes \Pi_l(\mu)\begin{bmatrix} A(\mu) & G(\mu)\\ G(\mu)^T & 0\end{bmatrix}\Pi_r(\mu)\\ &= \begin{bmatrix} \Pi(\mu)A(\mu)\Pi(\mu)^T & 0\\ 0 & 0\end{bmatrix}\otimes\begin{bmatrix} \Pi(\mu)E(\mu)\Pi(\mu)^T & 0\\ 0 & 0\end{bmatrix} + \begin{bmatrix} \Pi(\mu)E(\mu)\Pi(\mu)^T & 0\\ 0 & 0\end{bmatrix}\otimes\begin{bmatrix} \Pi(\mu)A(\mu)\Pi(\mu)^T & 0\\ 0 & 0\end{bmatrix}.\end{aligned}$$

Define 𝚷(μ) := Π(μ) ⊗ Π(μ) and the matrix

$$L(\mu) := A(\mu)\otimes E(\mu) + E(\mu)\otimes A(\mu).$$

Denoting the smallest positive eigenvalue of a symmetric matrix by λ_min^+(·), Corollary 3.1.5 from (Horn and Johnson Citation1991), the eigenvalue properties of the Kronecker product given in (Horn and Johnson Citation1991; Lancaster and Tismenetsky Citation1985) (see also (Son and Stykel Citation2017)), and a Rayleigh quotient argument lead to the estimate

$$\begin{aligned}\sigma_{\min}^{+}\big(\boldsymbol{\Pi}_l(\mu)\mathcal{L}(\mu)\boldsymbol{\Pi}_r(\mu)\big) &= \sigma_{\min}^{+}\big(\boldsymbol{\Pi}(\mu)L(\mu)\boldsymbol{\Pi}(\mu)^T\big)\\ &\ge \lambda_{\min}^{+}\Big(-\tfrac{1}{2}\,\boldsymbol{\Pi}(\mu)\big(L(\mu)+L(\mu)^T\big)\boldsymbol{\Pi}(\mu)^T\Big)\\ &= \min_{v\in\operatorname{im}\boldsymbol{\Pi}(\mu)^T,\ \|v\|_2=1} -\tfrac{1}{2}\,v^T\boldsymbol{\Pi}(\mu)\big(L(\mu)+L(\mu)^T\big)\boldsymbol{\Pi}(\mu)^T v\\ &= \min_{v\in\operatorname{im}\boldsymbol{\Pi}(\mu)^T,\ \|v\|_2=1} -\tfrac{1}{2}\,v^T\big(L(\mu)+L(\mu)^T\big)v\\ &\ge \min_{v\in\mathbb{R}^{n^2},\ \|v\|_2=1} -\tfrac{1}{2}\,v^T\big(L(\mu)+L(\mu)^T\big)v\\ &= \lambda_{\min}\Big(-\tfrac{1}{2}\big(L(\mu)+L(\mu)^T\big)\Big) \ge 2\,\lambda_{\min}\big(E(\mu)\big)\,\lambda_{\min}\big(-A_S(\mu)\big).\end{aligned}$$

Since computing the eigenvalues of E(μ) and A(μ) for all requested parameters μ can be expensive, there are several ways to find cheap lower bounds of 2λ_min(E(μ))λ_min(−A_S(μ)), some of which are derived in (Son and Stykel Citation2017). Based on these bounds, we can, e.g., estimate

$$2\,\lambda_{\min}\big(E(\mu)\big)\,\lambda_{\min}\big(-A_S(\mu)\big) \ge 2\min_{k=0,\ldots,n_E}\frac{\Theta_k^E(\mu)}{\Theta_k^E(\bar{\mu})}\,\lambda_{\min}\big(E(\bar{\mu})\big)\;\min_{k=0,\ldots,n_A}\frac{\Theta_k^A(\mu)}{\Theta_k^A(\bar{\mu})}\,\lambda_{\min}\big(-A_S(\bar{\mu})\big) =: \alpha(\mu), \tag{32}$$

where μ̄ is an arbitrary fixed parameter in D and where we make use of the convention that Θ_0^E = Θ_0^A ≡ 1. Here, we compute the eigenvalues of E(μ̄) and A_S(μ̄) only once to find a lower bound α(μ) of σ_min^+(𝚷_l(μ)𝓛(μ)𝚷_r(μ)) for every parameter μ ∈ D.
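The bound (32) can be implemented so that the eigenvalue computations at μ̄ happen only once; a small sketch (with our own function names and the lists of Θ_k functions and matrices passed in explicitly) is:

```python
import numpy as np

def make_alpha(thetas_E, Es, thetas_A, As, mu_bar):
    """Sketch of the cheap lower bound alpha(mu) from (32).  `thetas_E`/`thetas_A`
    are lists of scalar functions Theta_k, `Es`/`As` the corresponding symmetric
    matrices (the A_k entering A_S(mu) = sum_k Theta_k^A(mu) A_k)."""
    E_bar = sum(th(mu_bar) * Ek for th, Ek in zip(thetas_E, Es))
    AS_bar = sum(th(mu_bar) * Ak for th, Ak in zip(thetas_A, As))
    lam_E = np.linalg.eigvalsh(E_bar)[0]        # lambda_min(E(mu_bar))
    lam_A = np.linalg.eigvalsh(-AS_bar)[0]      # lambda_min(-A_S(mu_bar))
    def alpha(mu):
        cE = min(th(mu) / th(mu_bar) for th in thetas_E)
        cA = min(th(mu) / th(mu_bar) for th in thetas_A)
        return 2.0 * cE * lam_E * cA * lam_A
    return alpha
```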

4.2.2. Mechanical systems

We turn to the mechanical system from Subsection 2.2. There, the corresponding matrix pencil set

$$\left\{ s\begin{bmatrix} I_{n_x} & 0\\ 0 & M(\mu)\end{bmatrix} - \begin{bmatrix} 0 & I_{n_x}\\ -K(\mu) & -D(\mu)\end{bmatrix} \in \mathbb{R}[s]^{2n_x,2n_x} \,\middle|\, \mu \in D\right\}$$

is not uniformly strictly dissipative. However, it can be made uniformly strictly dissipative by appropriate generalized state-space transformations as we will show now.

Here, we apply a transformation presented in (Eid et al. Citation2011; Panzer, Wolf, and Lohmann Citation2012). We observe that, by adding productive zeros and for an auxiliary function γ : D → (0,∞) yet to be chosen, the system (5) is equivalent to the system

$$\begin{aligned}\frac{d}{dt}K(\mu)x(t) + \gamma(\mu)\frac{d^2}{dt^2}M(\mu)x(t) &= -\gamma(\mu)K(\mu)x(t) + \frac{d}{dt}K(\mu)x(t) - \gamma(\mu)\frac{d}{dt}D(\mu)x(t) + \gamma(\mu)G(\mu)\lambda(t) + \gamma(\mu)B_x(\mu)u(t),\\ \gamma(\mu)\frac{d}{dt}M(\mu)x(t) + \frac{d^2}{dt^2}M(\mu)x(t) &= -K(\mu)x(t) + \gamma(\mu)\frac{d}{dt}M(\mu)x(t) - \frac{d}{dt}D(\mu)x(t) + G(\mu)\lambda(t) + B_x(\mu)u(t),\\ 0 &= \gamma(\mu)G(\mu)^T x(t),\\ 0 &= \frac{d}{dt}G(\mu)^T x(t),\end{aligned}$$

where x(t) equals x_1(t) and (d/dt)x(t) equals x_2(t) from system (5). The last equation is a direct consequence of the equation above it and poses no extra restrictions. These equations can be written as a first-order system. By defining the matrices

$$E(\mu) := \begin{bmatrix} K(\mu) & \gamma(\mu)M(\mu)\\ \gamma(\mu)M(\mu) & M(\mu)\end{bmatrix}, \qquad A(\mu) := \begin{bmatrix} -\gamma(\mu)K(\mu) & K(\mu) - \gamma(\mu)D(\mu)\\ -K(\mu) & -D(\mu) + \gamma(\mu)M(\mu)\end{bmatrix}, \tag{33}$$
$$B(\mu) := \begin{bmatrix} \gamma(\mu)B_x(\mu)\\ B_x(\mu)\end{bmatrix},$$

and replacing G(μ) by the stacked constraint matrix [γ(μ)G(μ)^T  G(μ)^T]^T, we generate a system in the form (4) such that we can apply the methods from Subsection 4.2.1 to this kind of system in the following, if γ(μ) is chosen properly. This is possible because, even if A(μ) is unsymmetric, the projector Π(μ) in (31) will be the same in both the left and right spectral projectors Π_l(μ) and Π_r(μ), and hence the estimate in Theorem 4.4 is still valid. Only an estimation of the form (32) is not possible, since the matrices E(μ) and A_S(μ) cannot be written as affine decompositions in which all summands are positive or negative semidefinite, respectively. Therefore, in our numerical experiments we make use of Theorem 4.4 directly.

Theorem 4.5. Consider the mechanical system (5). Then the matrix pencil set {sE(μ) − A(μ) | μ ∈ D} with E(μ) and A(μ) defined in (33) is uniformly strictly dissipative if 0 < γ(μ) < γ*(μ) with

$$\gamma^*(\mu) = \frac{\lambda_{\min}\big(D(\mu)\big)}{\lambda_{\max}\big(M(\mu)\big) + \frac{1}{4}\lambda_{\max}\big(D(\mu)\big)^2\,\lambda_{\max}\big(K(\mu)^{-1}\big)}$$

is satisfied for all μ ∈ D.

Proof. As shown in (Panzer, Wolf, and Lohmann Citation2012), the pencil set {sE(μ) − A(μ) | μ ∈ D} is uniformly strictly dissipative if we choose γ(μ) > 0 such that

$$\gamma(\mu) < \lambda_{\min}\Big( D(\mu)\big[M(\mu) + \tfrac{1}{4}D(\mu)K(\mu)^{-1}D(\mu)\big]^{-1}\Big) \qquad\text{for all } \mu \in D.$$

Using the estimate λ_min(XY) ≥ λ_min(X)λ_min(Y) for symmetric and positive definite matrices X and Y of conforming dimension (Wang, Zhang Citation1992, Theorem 4), we arrive at the choice for γ(μ) given by

$$\gamma(\mu)\,\lambda_{\max}\Big(M(\mu) + \tfrac{1}{4}D(\mu)K(\mu)^{-1}D(\mu)\Big) < \lambda_{\min}\big(D(\mu)\big) \qquad\text{for all } \mu \in D. \tag{34}$$

Because of the symmetry and positive definiteness of D(μ)K(μ)^{-1}D(μ) and the submultiplicativity of the spectral norm we obtain

$$\lambda_{\max}\big(D(\mu)K(\mu)^{-1}D(\mu)\big) = \big\|D(\mu)K(\mu)^{-1}D(\mu)\big\|_2 \le \|D(\mu)\|_2^2\,\|K(\mu)^{-1}\|_2 = \lambda_{\max}\big(D(\mu)\big)^2\,\lambda_{\max}\big(K(\mu)^{-1}\big).$$

By Weyl’s lemma (Horn and Johnson Citation1991; Lancaster and Tismenetsky Citation1985), we also have

$$\lambda_{\max}\Big(M(\mu) + \tfrac{1}{4}D(\mu)K(\mu)^{-1}D(\mu)\Big) \le \lambda_{\max}\big(M(\mu)\big) + \tfrac{1}{4}\lambda_{\max}\big(D(\mu)K(\mu)^{-1}D(\mu)\big).$$

Therefore, condition (34) is satisfied, if we choose γ(μ) such that

$$\gamma(\mu)\Big(\lambda_{\max}\big(M(\mu)\big) + \tfrac{1}{4}\lambda_{\max}\big(D(\mu)\big)^2\,\lambda_{\max}\big(K(\mu)^{-1}\big)\Big) < \lambda_{\min}\big(D(\mu)\big) \qquad\text{for all } \mu \in D.$$

This can be achieved by the choice

$$\gamma(\mu) < \frac{\lambda_{\min}\big(D(\mu)\big)}{\lambda_{\max}\big(M(\mu)\big) + \frac{1}{4}\lambda_{\max}\big(D(\mu)\big)^2\,\lambda_{\max}\big(K(\mu)^{-1}\big)}. \tag{35}$$

To avoid computing the eigenvalues of M(μ),D(μ),K(μ) for every parameter μ, we further use the estimates

$$\lambda_{\max}\big(D(\mu)\big) \le \lambda_{\max}(D_0) + \sum_{k=1}^{n_D}\Theta_k^D(\mu)\lambda_{\max}(D_k) = \sum_{k=0}^{n_D}\Theta_k^D(\mu)\lambda_{\max}(D_k),$$
$$\lambda_{\min}\big(D(\mu)\big) \ge \lambda_{\min}(D_0) + \sum_{k=1}^{n_D}\Theta_k^D(\mu)\lambda_{\min}(D_k) = \sum_{k=0}^{n_D}\Theta_k^D(\mu)\lambda_{\min}(D_k),$$

that are a consequence of Weyl’s lemma. Similar estimates are also valid for M(μ) and K(μ). Then we can further estimate γ(μ) as

$$\gamma(\mu) < \tilde{\gamma}(\mu) := \frac{\sum_{k=0}^{n_D}\Theta_k^D(\mu)\lambda_{\min}(D_k)}{\sum_{k=0}^{n_M}\Theta_k^M(\mu)\lambda_{\max}(M_k) + \frac{1}{4}\Big(\sum_{k=0}^{n_D}\Theta_k^D(\mu)\lambda_{\max}(D_k)\Big)^2\Big/\sum_{k=0}^{n_K}\Theta_k^K(\mu)\lambda_{\min}(K_k)}, \tag{36}$$

while ensuring that we can choose γ(μ) > 0 for all μ ∈ D.

The benefit of the above estimate is that the extremal eigenvalues of the matrices M_k, D_k, and K_k have to be computed only once in the beginning; afterwards, its evaluation is cheap, since Θ_k^M(·), Θ_k^D(·), Θ_k^K(·) are scalar-valued functions.
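Analogously to the bound α(μ) above, the bound (36) requires the extremal eigenvalues of the constant matrices only once; a possible implementation sketch (names are ours) is:

```python
import numpy as np

def gamma_bound(thetas_M, Ms, thetas_D, Ds, thetas_K, Ks):
    """Sketch of the cheap upper bound (36) for the admissible gamma(mu).
    The extremal eigenvalues of the constant matrices are computed once;
    the returned function is then inexpensive to evaluate."""
    lmax_M = [np.linalg.eigvalsh(Mk)[-1] for Mk in Ms]
    lmax_D = [np.linalg.eigvalsh(Dk)[-1] for Dk in Ds]
    lmin_D = [np.linalg.eigvalsh(Dk)[0] for Dk in Ds]
    lmin_K = [np.linalg.eigvalsh(Kk)[0] for Kk in Ks]
    def gamma_tilde(mu):
        num = sum(th(mu) * l for th, l in zip(thetas_D, lmin_D))
        den = (sum(th(mu) * l for th, l in zip(thetas_M, lmax_M))
               + 0.25 * sum(th(mu) * l for th, l in zip(thetas_D, lmax_D)) ** 2
               / sum(th(mu) * l for th, l in zip(thetas_K, lmin_K)))
        return num / den
    return gamma_tilde
```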

4.3. Treatment of the algebraic parts

The goal of this subsection is to discuss the reduction of the algebraic part of the system, which is potentially also parameter-dependent. To do so, we first precompute specific matrices in the offline phase, which are later used to efficiently construct the Markov parameters in the online phase for a particular parameter configuration; afterwards, we determine a reduced representation of the algebraic subsystem using the construction from the last paragraph of Subsection 3.2.3. In order for this construction to be efficient, we assume that the parameter influence has an additional low-rank structure, i.e., with regard to the affine decomposition (2), we assume that E_k, A_k, B_k, and C_k are low-rank matrices for k ≥ 1, i.e., they admit low-rank factorizations

$$E_k = U_{E,k}V_{E,k}^T, \quad k = 1,\ldots,n_E, \qquad A_k = U_{A,k}V_{A,k}^T, \quad k = 1,\ldots,n_A. \tag{37}$$

To determine the Markov parameters, for a fixed set of sample points ω_i ∈ R, i = 1,…,n_S, we aim to evaluate G(μ, iω_i) efficiently for all needed parameters μ ∈ D. In other words, we aim for an efficient procedure to evaluate the transfer function G(μ, iω) for a fixed value ω ∈ R and varying μ. Due to (37), and since n_E and n_A are assumed to be very small, also the matrix iω∑_{k=1}^{n_E}Θ_k^E(μ)E_k − ∑_{k=1}^{n_A}Θ_k^A(μ)A_k is of low rank and we can factorize it as

$$i\omega\sum_{k=1}^{n_E}\Theta_k^E(\mu)E_k - \sum_{k=1}^{n_A}\Theta_k^A(\mu)A_k = \begin{bmatrix} U_{E,1} & \cdots & U_{E,n_E} & U_{A,1} & \cdots & U_{A,n_A}\end{bmatrix}\,\operatorname{diag}\Big(i\omega\Theta_1^E(\mu)I, \ldots, i\omega\Theta_{n_E}^E(\mu)I, -\Theta_1^A(\mu)I, \ldots, -\Theta_{n_A}^A(\mu)I\Big)\begin{bmatrix} V_{E,1}^T\\ \vdots\\ V_{E,n_E}^T\\ V_{A,1}^T\\ \vdots\\ V_{A,n_A}^T\end{bmatrix} =: U\,\Omega(\mu, i\omega)\,V^T.$$

It holds that

$$G(\mu, i\omega) = \sum_{i=0}^{n_C}\sum_{j=0}^{n_B}\Theta_i^C(\mu)\Theta_j^B(\mu)\, C_i\big(i\omega E_0 - A_0 + U\Omega(\mu,i\omega)V^T\big)^{-1}B_j.$$

With the help of the Sherman-Morrison-Woodbury identity, we obtain

$$C_i\big(i\omega E_0 - A_0 + U\Omega(\mu,i\omega)V^T\big)^{-1}B_j = C_i\big(i\omega E_0 - A_0\big)^{-1}B_j - C_i\big(i\omega E_0 - A_0\big)^{-1}U\Big(\Omega(\mu,i\omega)^{-1} + V^T\big(i\omega E_0 - A_0\big)^{-1}U\Big)^{-1}V^T\big(i\omega E_0 - A_0\big)^{-1}B_j.$$

The matrices C_i(iωE_0 − A_0)^{-1}B_j, C_i(iωE_0 − A_0)^{-1}U, V^T(iωE_0 − A_0)^{-1}U, and V^T(iωE_0 − A_0)^{-1}B_j are all of small dimension and can be precomputed in the offline phase. In the online phase, the small matrix Ω(μ,iω)^{-1} + V^T(iωE_0 − A_0)^{-1}U can be formed (under the assumption that all necessary matrices are invertible) and used for small dense system solves in order to calculate G(μ,iω) with little effort.
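A sketch of this offline/online splitting for a single frequency and constant input/output matrices (i.e., suppressing the sums over i and j; all names are our own) is given below:

```python
import numpy as np

def precompute_smw(E0, A0, U, V, B0, C0, omega):
    """Offline part of the Sherman-Morrison-Woodbury evaluation sketched
    above, for one fixed frequency omega."""
    M = 1j * omega * E0 - A0
    MinvB = np.linalg.solve(M, B0)
    MinvU = np.linalg.solve(M, U)
    return {"CB": C0 @ MinvB, "CU": C0 @ MinvU,
            "VU": V.T @ MinvU, "VB": V.T @ MinvB}

def evaluate_online(pre, Omega):
    """Online part: Omega = Omega(mu, i*omega) is the small diagonal factor."""
    core = np.linalg.inv(Omega) + pre["VU"]          # small dense matrix
    return pre["CB"] - pre["CU"] @ np.linalg.solve(core, pre["VB"])
```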

5. Numerical results

In this section, we present the numerical results for the two differential-algebraic systems from Section 2. The computations have been done on a computer with 2 Intel Xeon Silver 4110 CPUs running at 2.1 GHz and equipped with 192 GB total main memory. The experiments use MATLAB®R2021a (9.3.0.713579) and examples and methods from M-M.E.S.S.-2.2 (Saak, Köhler, and Benner Citation2022).

5.1. Problem 1: Stokes equation

We consider a system that describes the creeping flow in capillaries or porous media. It has the following structure

$$\frac{\partial}{\partial t}v(\zeta,t) = \mu\,\Delta v(\zeta,t) - \nabla p(\zeta,t) + f(\zeta,t), \qquad 0 = \operatorname{div}\big(v(\zeta,t)\big), \tag{38}$$

with appropriate initial and boundary conditions. The position in the domain Ω ⊂ R² is described by ζ ∈ Ω and t ≥ 0 is the time. For simplicity, we use a classical solution concept and assume that the external force f : Ω×[0,∞) → R² is continuous and that the velocities v : Ω×[0,∞) → R² and pressures p : Ω×[0,∞) → R satisfy the necessary smoothness conditions. The parameter μ ∈ D represents the dynamic viscosity. We discretize system (38) by finite differences as shown in (Mehrmann and Stykel Citation2005; Stykel Citation2006) and add an output equation. Then, we obtain a discretized system of the form (4), which is of index 2. The matrix A(μ) = μA ∈ R^{n,n} depends on the viscosity parameter μ.

Figure 1. Error and error estimates of the approximated controllability Gramians for the Stokes system (4) after the first iteration of the reduced basis method.

Figure 2. Results for the reduction of the Stokes system (4).

Our constant example matrices E = I_n, A, G, B_1, C_1 are created by the M-M.E.S.S. function stokes_FVM and we choose the dimensions n = 3280, q = 1681, m = p = 1, and the parameter domain D = [1/2, 3/2]. The test parameter set D_Test consists of ten equidistant points within D, i.e., D_Test = {1/2 + (1/9)k | k = 0, …, 9}. The matrices B_2 and C_2 are zero matrices. The reduced basis method from Section 4 produces bases of dimension 24 for both projected continuous-time Lyapunov equations. This basis generation is done in the offline phase, which takes 2.7·10³ seconds for this example. For the first iteration of the reduced basis method corresponding to the proper controllability Lyapunov equation, we evaluate the errors and their estimates from Equations (28) and (29). They are presented in Figure 1, where the error is evaluated as

$$\text{error} \approx \big\|P_p^{\mathrm{acc}}(\mu) - Z(\mu)Z(\mu)^H\big\|_F,$$

where P_p^acc(μ) denotes an accurate approximation of the exact solution P_p(μ). For evaluating the error, the Lyapunov equations are also solved with the residual tolerance 10^{-15} to obtain an accurate estimate of the exact solution. After the first iteration step, we obtain an error that is already smaller than the tolerance tol = 10^{-4}. Additionally, we see that the error estimator Δ_V^(2) provides an almost sharp bound of the actual error, while Δ_V^(1) leads to more conservative error bounds.

After we have determined the bases, we approximate the proper controllability and observability Gramians to apply balanced truncation in the online phase for two different parameters, μ = 1.28 and μ = 0.65, where the first parameter is the eighth entry of the test parameter set D_Test. The reduced systems for both parameters are of dimension r = 10 and consist only of differential states. The online phase lasts 3.7·10^{-2} and 1.9·10^{-2} seconds, respectively, for this example. Figure 2 depicts the transfer functions of the full and the reduced systems and the corresponding errors. We observe that the error is smaller than 10^{-8} for all ω in [10^{-4}, 10^{4}]. Additionally, in Figure 3, we show the output y(t) of the full system and the output y_R(t) of the reduced system for the input u(t) = 2 sin(t), t ∈ [0,10], and the initial condition z(0) = 0. We also depict the error ‖y(t) − y_R(t)‖_2, which is smaller than 10^{-8} over the entire time range. Hence, we observe that the system behaviour is well approximated by the reduced system.

Figure 3. Output and output error of the original and reduced Stokes system (4).

Now, to demonstrate that the methods presented in this paper are also suitable for systems with an improper transfer function, we modify the system such that the matrices B_2 and C_2 are chosen as B_2 = [0 ⋯ 0 1]^T and C_2 = [1 0 ⋯ 0 1]. In this way, we obtain more complex restrictions on G^T x(t) and also evaluate the pressure, described by λ(t), to generate the output y(t). We additionally assume that the external force, represented by the input function B_1(μ)u(t), varies its magnitude depending on the viscosity of the system, so that the input matrix B_1(μ) = (1 + μ)B_1 is parameter-dependent, and hence the polynomial part of the transfer function is parameter-dependent as well.

Again, in the offline phase, we execute the reduced basis method to generate bases that approximate the proper controllability and observability spaces (i.e., the image spaces of the corresponding proper Gramians) of this system. This phase takes 9.3·10^3 seconds for this example. In Figure 4, the errors and error estimates obtained in the two stages of the reduced basis method corresponding to the proper controllability space are shown. For evaluating the error, the Lyapunov equations are again solved with the residual tolerance 10^{-15} to obtain an accurate estimate of the exact solution. We observe that after the first iteration step, the error is still significantly larger than the tolerance 10^{-4}. Additionally, we see that the error estimator Δ_V^{(2)} provides an almost sharp bound of the actual error, while Δ_V^{(1)} leads to more conservative error bounds. The second iteration of the reduced basis method and the corresponding errors are also depicted in Figure 4. In this case, the error as well as the estimators are smaller than the tolerance tol = 10^{-4}. The final dimensions of the projection spaces for the two projected continuous-time Lyapunov equations are 48 and 54.

The next step is to use the bases to generate approximations of the system Gramians and to apply balanced truncation, which leads, for both parameters μ = 1.28 and μ = 0.65, to a reduced system of dimension 25 with 23 differential states and 2 algebraic ones. The online phase lasts 3.2 and 3.1 seconds, respectively, if we use balanced truncation via the improper Gramians to reduce the improper system components, and 2.4 seconds for both parameters if we instead extract the polynomial part of the transfer function. However, in this example, the parameter dependency does not have a low-rank structure. Thus, for the determination of the polynomial part, it is necessary to solve one linear system of equations of full order in the online phase for each parameter of interest. Explicit formulas for the improper Gramians are described in (Stykel Citation2006). For a fixed parameter, the explicit formula

(39) S_i^T A R_i = \begin{bmatrix} B_{11} & C_2 (G^T G)^{-1} B_2 \\ C_2 (G^T G)^{-1} B_2 & 0 \end{bmatrix}

is provided with

B_{11} := C_1 G (G^T G)^{-1} B_2 + C_2 (G^T G)^{-1} G^T B_{12}, \qquad B_{12} := B_1 - A G (G^T G)^{-1} B_2,

compare with (14). This formula replaces the computation of the low-rank factors of the improper Gramians in the online phase. In our setting, (39) can be evaluated efficiently since all involved matrices can be precomputed and then appropriately scaled with μ in the parametric setting.
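
The precompute-and-scale evaluation of (39) can be sketched as follows. We assume, as in this example, that A, G, B_2, and C_2 are parameter-independent and that the parameter enters only through the scaling B_1(μ) = (1 + μ)B_1; the sign convention for B_12 follows the reconstruction above, and the function names are illustrative.

```python
# Sketch of the offline/online splitting for (39): the expensive pieces are
# formed once offline and only recombined/rescaled online. The sign convention
# for B12 follows the reconstruction above and is an assumption.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def offline(A, G, B1, B2, C2):
    cf = cho_factor(G.T @ G)                   # factorize G^T G once
    X = cho_solve(cf, B2)                      # (G^T G)^{-1} B2
    return {"C2X": C2 @ X,                     # parameter-independent block
            "AGX": A @ (G @ X),                # A G (G^T G)^{-1} B2
            "B1": B1}

def online(pre, mu):
    B12 = (1.0 + mu) * pre["B1"] - pre["AGX"]  # B12(mu) with B1(mu) = (1+mu) B1
    return B12, pre["C2X"]
```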

Figure 4. Error and error estimates of the approximated Gramians for the Stokes system (4) with improper parts.

Figure 5. Results for the reduction of the Stokes system (4) with improper parts.

We demonstrate the quality of the reduced systems by evaluating the error of the transfer functions. In the following, we denote the reduced system (error system) with an algebraic part generated by balanced truncation as reduced BT (error BT) and the reduced system (error system) with an algebraic part obtained by approximating the polynomial part of the transfer function as reduced TF (error TF). In Figure 5, we show the sigma plots of the original and reduced transfer functions as well as the errors for two parameters. The first parameter is μ = 1.28 ∈ D_Test, an element of the test parameter set, and hence we already know that the error estimators Δ_V^{(1/2)}(μ) are smaller than the tolerance and that the proper controllability space is well-approximated. The second parameter is μ = 0.65 ∈ D, and the corresponding results are also shown in Figure 5. The polynomial part of the transfer function is clearly observable since the transfer function does not tend to zero for growing values of ω. We observe that for both parameters the error is smaller than 10^{-5} in the entire frequency band [10^{-4}, 10^6] and that the parameter chosen from the test parameter set does not lead to significantly better approximations than an arbitrarily chosen parameter from the parameter set D. This allows the conclusion that the basis V_glob approximates the proper controllability space well for all parameters in D. Additionally, we observe that both procedures to approximate the improper part of the system lead to satisfactory results.
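
The sigma and error plots are obtained by pointwise evaluation of the transfer functions on a logarithmic frequency grid. The following sketch (dense linear algebra and SISO dimensions as in this example; not the authors' plotting code) indicates the basic computation.

```python
# Minimal sketch: evaluate G(i*omega) = C (i*omega*E - A)^{-1} B on a frequency
# grid and compare the full and the reduced model; dense solves for brevity.
import numpy as np

def transfer_function(E, A, B, C, omegas):
    return np.array([(C @ np.linalg.solve(1j * w * E - A, B)).item()
                     for w in omegas])

omegas = np.logspace(-4, 6, 400)
# err = np.abs(transfer_function(E, A, B, C, omegas)
#              - transfer_function(E_r, A_r, B_r, C_r, omegas))
```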

Finally, to illustrate that the output y is well approximated by the reduced output y_R, we evaluate in Figure 6 the norm of the output error ‖y(t) − y_R(t)‖_2 for t ∈ [0, 100] for the test parameter μ = 1.28 and the parameter μ = 0.65. We choose the input function u(t) = 5 − 5cos(t) and the initial condition z(0) = [x(0)^T, λ(0)^T]^T = 0 in such a way that the consistency conditions are satisfied. We obtain an output error that is smaller than 10^{-5} for all t ∈ [0, 100] for both evaluated parameters. Again, we observe that both approximations of the algebraic part of the system lead to results of similar quality.
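
The reported output errors require time simulations of the full and the reduced system. A very simple option, shown below as a hedged sketch rather than the authors' setup, is the implicit Euler method applied directly to E ż = A z + B u with the consistent initial value z(0) = 0; the step size and the dense solves are illustrative choices.

```python
# Hedged sketch: implicit Euler for E z' = A z + B u, y = C z, started from a
# consistent initial value; step size and dense solves are illustrative only.
import numpy as np

def implicit_euler(E, A, B, C, u, t_end, h, z0):
    z, ys = z0.copy(), []
    M = E - h * A                      # in practice, factorize this matrix once
    for k in range(1, int(round(t_end / h)) + 1):
        rhs = E @ z + h * (B @ np.atleast_1d(u(k * h)))
        z = np.linalg.solve(M, rhs)
        ys.append((C @ z).item())
    return np.array(ys)

# input matching the experiment above:
# y = implicit_euler(E, A, B, C, lambda t: 5.0 - 5.0 * np.cos(t), 100.0, 1e-2,
#                    np.zeros(E.shape[0]))
```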

Figure 6. Output and output error of the original and reduced Stokes system (4) with improper parts.

5.2. Problem 2: Mechanical system

Figure 7. Constrained mass-damper-spring system including three chains of masses that are connected by the mass m_{3ℓ+1}.

As an example for system (5), we consider a constrained mass-spring-damper system, which is depicted in Figure 7 and which is similar to the one considered in (Ugrica Citation2020). The matrices are generated using the function triplechain_MSD from the M-M.E.S.S.-2.2 toolbox, see (Saak, Köhler, and Benner Citation2022), where the mass matrix M is built as

M = \begin{bmatrix} M_1 & & & \\ & M_2 & & \\ & & M_3 & \\ & & & m_0 \end{bmatrix}, \qquad M_i = m_i I_\ell \in \mathbb{R}^{\ell,\ell}, \quad i = 1,2,3,

for m1=m2=m3=m0=1. The stiffness matrix is built as

K = \begin{bmatrix} K_{11} & & & \kappa_1 \\ & K_{22} & & \kappa_2 \\ & & K_{33} & \kappa_3 \\ \kappa_1^T & \kappa_2^T & \kappa_3^T & k_1 + k_2 + k_3 + k_0 \end{bmatrix}, \qquad K_{ii} = k_i \begin{bmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{bmatrix} \in \mathbb{R}^{\ell,\ell},

with κ_i = [0 ⋯ 0 −k_i]^T and k_1 = k_2 = k_3 = k_0 = 1. We set ℓ = 200 so that the dimension of M and K is n = 3ℓ + 1 = 601, and the matrix G ∈ R^{n,1} is a zero matrix except for the entries G_{1,1} = 1 and G_{n,1} = −1. Hence, the first and the last mass are additionally connected by a rigid structure, as shown in Figure 7. Consequently, the first-order system is of dimension 2n + 2 = 1204 after applying the transformation presented in Subsection 4.2.2. The input matrix B_x ∈ R^{n,1} is chosen as a zero matrix with one nonzero entry (B_x)_{450,1} = 1. The output matrix C_x ∈ R^{1,n} is given by C_x = B_x^T.

These systems occur in the field of damping optimization, see (Benner, Tomljanović, and Truhar Citation2011; Truhar, Tomljanović, and Puvača Citation2019; Truhar and Veselić Citation2009; Ugrica Citation2020), where the damping matrix D(μ) ∈ R^{n,n} consists of an internal damping D_int and some external dampers that need to be optimized. In our example, we choose Rayleigh damping as a model for the internal damping, i.e., D_int = ηM + ρK, where we set η = 0.01 and ρ = 0.02. To describe the external damping, we assume that every mass is connected to a grounded damper, where the masses m_1, …, m_ℓ are connected to dampers with viscosity μ_1, the masses m_{ℓ+1}, …, m_{2ℓ} to dampers with viscosity μ_2, and the masses m_{2ℓ+1}, …, m_{3ℓ} to dampers with viscosity μ_3, as depicted in Figure 7. The mass m_{3ℓ+1} is connected to a grounded damper with fixed viscosity 1. The external dampers can be described by the matrices

F_1 = \begin{bmatrix} I_\ell & \\ & 0_{2\ell+1,2\ell+1} \end{bmatrix}, \quad F_2 = \begin{bmatrix} 0_{\ell,\ell} & & \\ & I_\ell & \\ & & 0_{\ell+1,\ell+1} \end{bmatrix}, \quad F_3 = \begin{bmatrix} 0_{2\ell,2\ell} & & \\ & I_\ell & \\ & & 0_{1,1} \end{bmatrix}, \quad F_4 = \begin{bmatrix} 0_{3\ell,3\ell} & \\ & 1 \end{bmatrix},

where 0_{n_1,n_2} denotes the zero matrix in R^{n_1,n_2}. The overall damping matrix can then be constructed as

D(\mu) := D_{\mathrm{int}} + \mu_1 F_1 + \mu_2 F_2 + \mu_3 F_3 + F_4,

where μ = [μ_1, μ_2, μ_3]^T ∈ D collects the viscosities of the dampers, which are optimized and thus variable in our model. This model leads to a differential-algebraic system of index 3. We generate the index-2 surrogate system as shown in Subsection 4.2.2, using a value γ(μ) chosen as one half of the quantity defined in (36).
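
For completeness, the following sketch assembles M, K, G, and D(μ) as described above; the experiments themselves use triplechain_MSD from M-M.E.S.S. The signs of the coupling entries κ_i and of the constraint vector G follow the reconstruction given above and should be regarded as assumptions.

```python
# Sketch (not the authors' code; they use triplechain_MSD from M-M.E.S.S.):
# assemble M, K, G and D(mu) for the triple-chain oscillator described above.
import numpy as np
import scipy.sparse as sp

def triple_chain(ell=200, masses=(1.0, 1.0, 1.0, 1.0),
                 stiff=(1.0, 1.0, 1.0, 1.0), eta=0.01, rho=0.02):
    m1, m2, m3, m0 = masses
    k1, k2, k3, k0 = stiff
    n = 3 * ell + 1

    M = sp.diags(np.concatenate([m1 * np.ones(ell), m2 * np.ones(ell),
                                 m3 * np.ones(ell), [m0]])).tocsr()

    K = sp.lil_matrix((n, n))
    e = np.ones(ell)
    for i, ki in enumerate((k1, k2, k3)):
        block = ki * sp.diags([-e[:-1], 2.0 * e, -e[:-1]], [-1, 0, 1])
        sl = slice(i * ell, (i + 1) * ell)
        K[sl, sl] = block.toarray()
        K[(i + 1) * ell - 1, n - 1] = -ki      # coupling kappa_i (sign assumed)
        K[n - 1, (i + 1) * ell - 1] = -ki
    K[n - 1, n - 1] = k1 + k2 + k3 + k0
    K = K.tocsr()

    G = np.zeros((n, 1))
    G[0, 0], G[n - 1, 0] = 1.0, -1.0           # rigid coupling (sign assumed)

    D_int = eta * M + rho * K                  # Rayleigh damping
    F = [sp.diags(np.concatenate([np.zeros(i * ell), np.ones(ell),
                                  np.zeros(n - (i + 1) * ell)]))
         for i in range(3)]
    F4 = sp.diags(np.concatenate([np.zeros(n - 1), [1.0]]))

    def D(mu):
        """D(mu) = D_int + mu1 F1 + mu2 F2 + mu3 F3 + F4."""
        return D_int + mu[0] * F[0] + mu[1] * F[1] + mu[2] * F[2] + F4

    return M, K, G, D
```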

We consider two parameter settings: in the first one the system is damped more strongly, in the second one more weakly. The latter setting is used to illustrate that a lightly damped system is more difficult to approximate by projection spaces of small dimension, which poses a challenge from the numerical point of view. For both parameter settings, the reduced models are obtained by truncating all state variables corresponding to proper Hankel singular values with σ_{p,k}/σ_{p,1} < 10^{-8}.

5.2.1. Setting 1: Stronger external damping

We consider the parameter set D = [0.1, 1]^3 and choose the test parameter set D_Test to consist of 50 randomly selected parameter triples from D. To generate them, we use the rand operator from MATLAB with a fixed seed, so that the k-th test parameter is given by μ^{(k)} = 0.1 + 0.9·rand(3,1).
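
An equivalent way to generate such a reproducible test set in NumPy is sketched below; the seed value is an arbitrary placeholder and not the one used in the experiments.

```python
# Reproducible test parameters in [0.1, 1]^3; the seed value is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
D_test = 0.1 + 0.9 * rng.random((50, 3))   # 50 parameter triples
```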

Figure 8. Error and error estimates of the approximated controllability Gramians of the mechanical system (5) with stronger external damping and μ_1 = 0.25, μ_3 = 0.35 after the first iteration of the reduced basis method.

Figure 9. Results for the reduction of the mechanical system (5) with stronger external damping.

In the offline phase, the approximation of the controllability and observability spaces takes 1.4·10^4 seconds for this example. For the proper controllability space, we evaluate the errors and the error estimates, which are depicted in Figure 8 for the first iteration of the reduced basis method. For evaluating the error, the Lyapunov equations (9) are also solved by computing the projection matrices Π_l(μ) and Π_r(μ) explicitly and solving the corresponding Lyapunov equations. To this end, we fix the parameters μ_1 = 0.25 and μ_3 = 0.35 and show the errors for the varying second parameter μ_2 ∈ [0.1, 1]. After the first iteration and for a basis of dimension 219, the maximal error estimate Δ_V^{(2)} is equal to 9.48·10^{-5}. Hence, the basis approximates the proper controllability space sufficiently well. Again, we see that the error estimator Δ_V^{(2)} approximates the actual error better than Δ_V^{(1)}, which massively overestimates it. Hence, we only use the error estimator Δ_V^{(2)} to determine the basis quality.

We need two iterations of the reduced basis method to determine the respective observability subspace, which is of dimension 304. Hence, we have to solve Lyapunov equations of dimensions 219 and 304 in the online phase to compute the Gramian approximations. We apply the online phase for the parameters μ = [0.2622, 0.1175, 0.5169]^T ∈ D_Test ⊂ D (rounded to four digits here) and μ = [0.15, 0.15, 0.5]^T ∈ D. We obtain two reduced systems of dimensions 75 and 63, respectively, in which all states are differential ones. The online phases take 6.4·10^{-1} and 1.3·10^{-1} seconds, respectively, for this example.

Figure 10. Output and output error of the original and reduced system (5) with stronger external damping.

In Figure 9, we depict the original, the reduced, and the error transfer functions. Additionally, we evaluate the outputs y and y_R of the original and the reduced system. The results are shown in Figure 10, where we observe the norm of the output error ‖y(t) − y_R(t)‖_2 for t ∈ [0, 50]. We choose the input function u(t) = 1 − cos(t) and the initial condition z(0) = [x(0)^T, λ(0)^T]^T = 0, so that the consistency conditions are satisfied. We observe that the output error is smaller than 10^{-2} for all t ∈ [0, 50]. Note that for damping optimization examples, the proper controllability and observability spaces are often hard to approximate by spaces of small dimension, see (Benner et al. Citation2016; Tomljanović, Beattie, and Gugercin Citation2018). Hence, the tolerances are chosen rather large for this kind of example to achieve satisfying results.

5.2.2. Setting 2: Weaker external damping

Now, we consider a parameter set with viscosities of smaller magnitude, that is, D = [0.01, 0.1]^3. The test parameter set D_Test consists of 50 randomly selected parameters from D. To generate the test parameters, we again use the rand operator from MATLAB with a fixed seed, so that the k-th parameter is given by μ^{(k)} = 0.01 + 0.09·rand(3,1).

Figure 11. Error and error estimates of the approximated observability Gramians of the mechanical system (5) with weaker external damping and μ_1 = 0.025, μ_3 = 0.035 after the first iteration of the reduced basis method.

Figure 12. Results for the reduction of the mechanical system (5) with weaker external damping.

First, we apply the offline phase of the reduced basis method, which takes 6.9·10^3 seconds. When computing the proper controllability space, two iterations are needed to generate a subspace of dimension 449 that leads to error estimates Δ_V^{(2)}(μ) of at most 5.6·10^{-4}. For the proper observability space, the errors and the error estimates within the first iteration of the reduced basis method are illustrated in Figure 11 for the fixed parameters μ_1 = 0.025 and μ_3 = 0.035 and varying μ_2 ∈ [0.01, 0.1]. For evaluating the error, the Lyapunov equations (10) are also solved by computing the projection matrices Π_l(μ) and Π_r(μ) explicitly and solving the corresponding Lyapunov equations. After the first iteration, we determine a basis of dimension 223 for which the maximal error estimate Δ_V^{(2)} is equal to 5.8·10^{-4}, and hence sufficient accuracy is achieved. As for the first parameter setting, the error estimator Δ_V^{(2)} approximates the actual error better than Δ_V^{(1)}. Hence, we use the error estimator Δ_V^{(2)} within the reduced basis method. We apply the online phase for the parameters μ = [0.0886, 0.0972, 0.0882]^T ∈ D_Test ⊂ D (rounded to four digits) and μ = [0.01, 0.02, 0.01]^T ∈ D to derive reduced-order models of dimension 78 for the first parameter and dimension 178 for the second one. The online phases need 2.21 and 2.04 seconds, respectively.

Figure 13. Output and output error of the original and reduced mechanical system (5) with weaker external damping.

Figure 12 depicts the original, the reduced, and the error transfer functions. Additionally, we evaluate the norm of the output error ‖y(t) − y_R(t)‖_2 for t ∈ [0, 50] in Figure 13, where the input function u(t) = 1 − cos(t) and the initial condition z(0) = [x(0)^T, λ(0)^T]^T = 0 are chosen in such a way that the consistency conditions are satisfied. We observe that the output error is smaller than 10^{-4} for all t ∈ [0, 50].

6. Conclusions

This paper has addressed the model reduction of parameter-dependent differential-algebraic systems by balanced truncation. To apply the balancing procedure, we have utilized specific projectors to eliminate the algebraic constraints. This has enabled us to compute the necessary Gramians efficiently by solving projected Lyapunov equations.

To handle the parameter dependency, we have applied the reduced basis method, which is split into an offline phase and an online phase. In the offline phase, we have computed a basis of a subspace that approximates the solution space of the parametric Lyapunov equations. To assess the quality of this subspace, we have derived two error estimators. Afterwards, in the online phase, we have solved a reduced Lyapunov equation to obtain, for a parameter value of interest, an approximation of the corresponding Gramian efficiently. Therefore, a balanced truncation for this parameter value can also be carried out quickly.

This method has been illustrated by numerical examples of index two and three. In particular, we were often able to reduce the associated Lyapunov equations to very small dimensions. We have also evaluated our error estimators and observed that the second one often estimates the error almost exactly.

Novelty statement

We propose a new method that reduces parametric differential-algebraic systems by combining projection methods for differential-algebraic equations with the reduced basis method for Lyapunov equations. To apply the reduced basis method, new error estimators are derived. The performance of our new method is illustrated on several examples.

Code availability

The code and data used to generate the numerical results of this work are freely available under the DOI 10.5281/zenodo.10040792 and the 3-clause BSD license; they were authored by Jennifer Przybilla.

CRediT author statement

Jennifer Przybilla: Methodology, Software, Data Curation, Writing – Original Draft, Writing – Review & Editing, Visualization; Matthias Voigt: Conceptualization, Writing – Original Draft, Writing – Review & Editing, Supervision.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Alì G, Banagaaya N, Schilders WHA, Tischendorf C. 2014. Index-aware model order reduction for differential-algebraic equations. Math Comput Model Dyn Syst. 20(4):345–373. doi:10.1080/13873954.2013.829501.
  • Antoulas AC. 2005. Approximation of Large-Scale Dynamical Systems. In: Advance Des Control Vol. 6 Philadelphia, PA: SIAM Publications, doi:10.1137/1.9780898718713.
  • Antoulas AC, Gosea IV, Heinkenschloss M. 2020. Data-driven model reduction for a class of semi-explicit DAEs using the Loewner framework. In: Reis T and Ilchmann A, editors. Progress in Differential-Algebraic Equations II, Differ.-Algebr. Equ. Forum. Cham: Springer; p. 185–210. doi: 10.1007/978-3-030-53905-4_7.
  • Banagaaya N. 2014. Index-Aware Model Order Reduction Methods for DAEs. The Netherlands: Proefschrift, Technische Universiteit Eindhoven. doi:10.6100/IR780937.
  • Banagaaya N, Ali G, Schilders WHA. 2016. Index-aware Model Order Reduction Methods. In: Atlantis Studies Science Computer Electromagnet. Paris: Atlantis Press; Vol. 2.
  • Bartels RH, Stewart GW. 1972. Solution of the matrix equation AX+XB=C: Algorithm 432. Comm ACM. 15(9):820–826. doi:10.1145/361573.361582.
  • Baur U, Beattie CA, Benner P, Gugercin S. 2011. Interpolatory projection methods for parameterized model reduction. SIAM J Sci Comput. 33(5):2489–2518. doi:10.1137/090776925.
  • Baur U, Benner P, Feng L. 2014. Model order reduction for linear and nonlinear systems: A system-theoretic perspective. Arch Comput Methods Eng. 21(4):331–358. doi:10.1007/s11831-014-9111-2.
  • Beattie CA, Gugercin S, Mehrmann V. 2022. Structure-preserving interpolatory model reduction for port-Hamiltonian differential-algebraic systems. In: Beattie C, Benner P, Embree M, Gugercin S, and Lefteriu S, editors. Realization and Model Reduction of Dynamical Systems. Cham: Springer; p. 235–254. doi:10.1007/978-3-030-95157-3_13.
  • Benner P, Gugercin S, Willcox K. 2015. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57(4):483–531. doi:10.1137/130932715.
  • Benner P, Kürschner P, Tomljanović Z, Truhar N. 2016. Semi-active damping optimization of vibrational systems using the parametric dominant pole algorithm. ZAMM Z Angew Math Mech. 96(5):604–619. doi:10.1002/zamm.201400158.
  • Benner P, Ohlberger M, Cohen A, Willcox K. 2017. editors, Model Reduction and Approximation: Theory and Algorithms. Comput. Sci. Eng. SIAM. doi:10.1137/1.9781611974829.
  • Benner P, Saak J, Uddin MM. 2016. Balancing based model reduction for structured index-2 unstable descriptor systems with application to flow control. Numer Algebra Cont Optim. 6(1):1–20. doi:10.3934/naco.2016.6.1.
  • Benner P, Schneider A. 2013. Balanced truncation for descriptor systems with many terminals. Max Planck Institute Magdeburg Preprints MPIMD/13–17, Available at https://csc.mpi-magdeburg.mpg.de/preprints/2013/MPIMD13-17.pdf.
  • Benner P, Stykel T. 2017. Model order reduction for differential-algebraic equations: A survey. In: Ilchmann A and Reis T, editors. Surveys in Differential-Algebraic Equations IV, Differ.-Algebr. Equ. Forum. Cham: Springer; p. 107–160. doi:10.1007/978-3-319-46618-7_3.
  • Benner P, Tomljanović Z, Truhar N. 2011. Dimension reduction for damping optimization in linear vibrating systems. ZAMM Z Angew Math Mech. 91(3):179–191. doi:10.1002/zamm.201000077.
  • Berger T, Ilchmann A, Trenn S. 2012. The quasi-Weierstraß form for regular matrix pencils. Linear Algebra Appl. 436(10):4052–4069. doi:10.1016/j.laa.2009.12.036.
  • Brenan KE, Campbell SL, Petzold LR. 1995. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. Philadelphia, PA: SIAM. doi:10.1137/1.9781611971224.
  • Campbell SL. 1980. Singular Systems of Differential Equations. In: Research Notes in Mathematics Vol. 40. San Francisco: Pitman Advanced Publishing Program, doi:10.1002/nme.1620150916.
  • Campbell SL, Meyer CD, Rose NJ. 1976. Applications of the Drazin inverse to linear systems of differential equations with singular constant coefficients. SIAM J Appl Math. 31(3):411–425. doi:10.1137/0131035.
  • Druskin V, Simoncini V. 2011. Adaptive rational Krylov subspaces for large-scale dynamical systems. Syst Cont Lett. 60(8):546–560. doi:10.1016/j.sysconle.2011.04.013.
  • Eid R, Castañé-Selga R, Panzer H, Wolf T, Lohmann B. 2011. Stability-preserving parametric model reduction by matrix interpolation. Math Comput Model Dyn Syst. 17(4):319–335. doi:10.1080/13873954.2011.547671.
  • Flagg G, Beattie C, Gugercin S. 2012. Convergence of the iterative rational Krylov algorithm. Syst Cont Lett. 61(6):688–691. doi:10.1016/j.sysconle.2012.03.005.
  • Geuss M, Panzer H, Lohmann B. On parametric model order reduction by matrix interpolation. In Proceedings of the 12th European Control Conference, 3433–3438, Strasbourg, France, 2013. doi:10.23919/ECC.2013.6669829.
  • Glover K. 1984. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. Internat J Control. 39(6):1115–1193. doi:10.1080/00207178408933239.
  • Gosea IV, Poussot-Vassal C, Antoulas AC. 2021. Data-driven modeling and control of large-scale dynamical systems in the Loewner framework: methodology and applications. In: Zuazua E and Trelat E, editors. Numerical Control: Part A, Hadb Numer Anal. Elsevier: North-Holland.
  • Gosea IV, Zhang Q, Antoulas AC. 2020. Preserving the DAE structure in the Loewner model reduction and identification framework. Adv Comput Math. 46(3). doi:10.1007/s10444-020-09752-8.
  • Gugercin S, Antoulas AC, Beattie C. 2008. H2 model reduction for large-scale linear dynamical systems. SIAM J Matrix Anal Appl. 30(2):609–638. doi:10.1137/060666123.
  • Gugercin S, Stykel T, Wyatt S. 2013. Model reduction of descriptor systems by interpolatory projection methods. SIAM J Sci Comput. 35(5):B1010–B1033. doi:10.1137/130906635.
  • Hammarling SJ. 1982. Numerical solution of the stable, non-negative definite Lyapunov equation. IMA J Numer Anal. 2(3):303–323. doi:10.1093/imanum/2.3.303.
  • Heinkenschloss M, Sorensen DC, Sun K. 2008. Balanced truncation model reduction for a class of descriptor systems with application to the Oseen equations. SIAM J Sci Comput. 30(2):1038–1063. doi:10.1137/070681910.
  • Hesthaven JS, Rozza G, Stamm B. 2016. Certified Reduced Basis Methods for Parametrized Partial Differential Equations. Cham: SpringerBriefs Math. Springer. doi:10.1007/978-3-319-22470-1.
  • Horn RA, Johnson CR. 1991. Topics in Matrix Analysis. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511840371.
  • Kunkel P, Mehrmann V. 2006. Differential-Algebraic Equations: Analysis and Numerical Solution. In: Textbooks in Mathematics. Zürich, Switzerland: EMS Publishing House.
  • Kürschner P. 2016, Efficient Low-Rank Solution of Large-Scale Matrix Equations. Dissertation, Otto-von-Guericke-Universität, Magdeburg, Germany: http://hdl.handle.net/11858/00-001M-0000-0029-CE18-2.
  • Lancaster P, Tismenetsky M. 1985. The Theory of Matrices. 2nd edition ed. Orlando: Academic Press.
  • März R. 1996. Canonical projectors for linear differential algebraic equations. Comput Math Appl. 31(4/5):121–135. doi:10.1016/0898-1221(95)00224-3.
  • Massoudi A, Opmeer MR, Reis T. 2017. The ADI method for bounded real and positive real Lur’e equations. Numer Math. 135(2):431–458. doi:10.1007/s00211-016-0805-2.
  • Mayo AJ, Antoulas AC. 2007. A framework for the solution of the generalized realization problem. Linear Algebra Appl. 425(2–3):634–662. doi:10.1016/j.laa.2007.03.008.
  • Mehrmann V, Stykel T. 2005. Balanced truncation model reduction for large-scale systems in descriptor form. In: Benner P, Mehrmann V, and Sorensen DC, editors. Dimension Reduction of Large-Scale Systems, volume 45 Lect Notes Comput Sci Eng. Berlin/Heidelberg: Springer; p. 83–115. doi:10.1007/3-540-27909-1_3.
  • Moore BC. 1981. Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans Automat Control. AC-26(1):17–32. doi:10.1109/TAC.1981.1102568.
  • Panzer H, Wolf T, Lohmann B. 2012. A strictly dissipative state space representation of second order systems. at-Automatisierungstechnik. 60(7):392–397. doi:10.1524/auto.2012.1015.
  • Penzl T. 2000. A cyclic low rank Smith method for large sparse Lyapunov equations. SIAM J Sci Comput. 21(4):1401–1418. doi:10.1137/S1064827598347666.
  • Pulch R, Narayan A, Stykel T. 2021. Sensitivity analysis of random linear differential–algebraic equations using system norms. J Comput Appl Math. 397:113666. doi:10.1016/j.cam.2021.113666.
  • Quarteroni A, Manzoni A, Negri F. 2016. Reduced Basis Methods for Partial Differential Equations. In: UNITEXT (La Matematica per il 3+2), Vol. 92. Cham: Springer International Publishing.
  • Saak J, Köhler M, Benner P. M-M.E.S.S.-2.2 – The Matrix Equations Sparse Solvers library, February 2022. doi:10.5281/zenodo.5938237.
  • Saak J, Voigt M. 2018. Model reduction of constrained mechanical systems in M-M.E.S.S. IFAC-PapersOnLine. 51(2):661–666. doi:10.1016/j.ifacol.2018.03.112. 9th Vienna International Conference on Mathematical Modelling, Vienna, 2018.
  • Schmidt A, Wittwar D, Haasdonk B. 2020. Rigorous and effective a-posteriori error bounds for nonlinear problems – Application to RB methods. Adv Comput Math. 46:32. doi:10.1007/s10444-020-09741-x.
  • Schwerdtner P, Moser T, Mehrmann V, Voigt M. 2023. Optimization-based model order reduction of port-Hamiltonian descriptor systems. Syst Control Lett. 182:105655. doi:10.1016/j.sysconle.2023.105655.
  • Simoncini V, Druskin V. 2009. Convergence analysis of projection methods for the numerical solution of large Lyapunov equations. SIAM J Num Anal. 47(2):828–843. doi:10.1137/070699378.
  • Son NT, Gousenbourger P-Y, Massart E, Stykel T. 2021. Balanced truncation for parametric linear systems using interpolation of Gramians: a comparison of algebraic and geometric approaches. In: Benner P, Breiten T, Faẞbender H, Hinze M, Stykel T, and Zimmermann R, editors. Model Reduction of Complex Dynamical Systems, volume 171 International Series of Numerical Mathematics. Cham: Birkhäuser; p. 31–51. doi:10.1007/978-3-030-72983-7_2.
  • Son NT, Stykel T. 2017. Solving parameter-dependent Lyapunov equations using the reduced basis method with application to parametric model order reduction. SIAM J Matrix Anal Appl. 38(2):478–504. doi:10.1137/15M1027097.
  • Sorensen DC, Zhou Y. 2002. Bounds on eigenvalue decay rates and sensitivity of solutions to Lyapunov equations. CAAM Technical Reports TR02-07, Rice University, Available at https://repository.rice.edu/items/7d1381b1-adc7-4282-8486-430784cef15c
  • Stykel T. 2004. Gramian-based model reduction for descriptor systems. Math Cont Sign Syst. 16(4):297–319. doi:10.1007/s00498-004-0141-4.
  • Stykel T. 2006. Balanced truncation model reduction for semidiscretized Stokes equation. Linear Algebra Appl. 415(2–3):262–289. doi:10.1016/j.laa.2004.01.015.
  • Stykel T. 2008. Low-rank iterative methods for projected generalized Lyapunov equations. Electron Trans Numer Anal. 30:187–202. URL http://etna.mcs.kent.edu/vol.30.2008/pp187-202.dir/pp187-202.pdf
  • Tombs MS, Postlethwaite I. 1987. Truncated balanced realization of a stable non-minimal state-space system. Internat J Control. 46(4):1319–1330. doi:10.1080/00207178708933971.
  • Tomljanović Z, Beattie C, Gugercin S. 2018. Damping optimization of parameter dependent mechanical systems by rational interpolation. Adv Comp Math. 44(6):1797–1820. doi:10.1007/s10444-018-9605-9.
  • Truhar N, Tomljanović Z, Puvača M. 2019. Approximation of damped quadratic eigenvalue problem by dimension reduction. Appl Math Comput. 347:40–53. doi:10.1016/j.amc.2018.10.047.
  • Truhar N, Veselić K. 2009. An efficient method for estimating the optimal dampers’ viscosity for linear vibrating systems using Lyapunov equation. SIAM J Matrix Anal Appl. 31(1):18–39. doi:10.1137/070683052.
  • Ugrica M. Approximation of Quadratic Eigenvalue Problem and Application to Damping Optimization. Doctoral thesis, University of Zagreb, Faculty of Science, Zagreb, Croatia, 2020. URL https://urn.nsk.hr/urn:nbn:hr:217:106554.
  • Wang B, Zhang F. 1992. Some inequalities for the eigenvalues of the product of positive semidefinite Hermitian matrices. Linear Algebra Appl. 160:113–118. doi:10.1016/0024-3795(92)90442-D.
  • Zhou K, Doyle JC, Glover K. 1996. Robust and Optimal Control. Upper Saddle River, NJ: Prentice-Hall.