Abstract
In this article, we discuss the time-domain dimension reduction methods for second-order systems by general orthogonal polynomials, and present a structure-preserving dimension reduction method for second-order systems. The resulting reduced systems not only preserve the second-order structure but also guarantee the stability under certain conditions. The error estimate of the reduced models is also given. The effectiveness of the proposed methods is demonstrated by three test examples.
1. Introduction
A variety of problems lead to large second-order systems, such as circuit simulation, structural mechanical systems, computational electromagnetics and microelectronic mechanical systems (MEMS) [Citation1–Citation3]. In many areas of engineering, second-order systems have the following form:
$$M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Bu(t), \qquad y(t) = Cx(t), \tag{1}$$
where $M$, $D$ and $K$ are the mass, damping and stiffness matrices, respectively, $x(t)$ is the state, $u(t)$ the input and $y(t)$ the output.
To reduce computational and resource demands, and to compute solutions and controls in acceptable time, dimension reduction techniques are applied. In particular, the structure-preserving dimension reduction of second-order systems [Citation4,Citation5] has been an active topic in recent years. Essentially, we can follow two paths when computing a reduced-order model (ROM) of a second-order system. First, we can rewrite the system (1) in the first-order representation (2) and apply dimension reduction methods to the equivalent first-order model. However, this approach has a major disadvantage: it ignores the physical meaning of the original system matrices, and in general the reduced-order system is no longer in second-order form.
Alternatively, we can preserve the second-order structure of the system and compute a second-order reduced model of the form:
$$M_r\ddot{z}(t) + D_r\dot{z}(t) + K_rz(t) = B_ru(t), \qquad \hat{y}(t) = C_rz(t),$$
whose dimension is much smaller than that of the original system.
During the last decades, a number of dimension reduction techniques have been proposed; overview articles on the different reduction techniques are given in [Citation6–Citation8] and the references therein. Among these, the main dimension reduction techniques for second-order systems are based on balanced truncation (BT) [Citation9–Citation13] on the one hand, and on Krylov subspace methods [Citation2,Citation14–Citation16] on the other. In the BT methods, solving a Lyapunov equation (or a Sylvester equation) is a key step. Direct solution of the Lyapunov equation is only possible for medium-scale models because it requires $\mathcal{O}(n^{3})$ operations and $\mathcal{O}(n^{2})$ storage, where n is the dimension of the system. Several methods have been proposed to extend the applicability of BT to large-scale systems. One approach is to use algorithms such as the LR-ADI or Smith iterations, which solve large Lyapunov or Sylvester equations approximately and yield low-rank Gramians; these low-rank Gramians are then used for approximate balanced truncation [Citation9,Citation17–Citation21]. Krylov subspace methods are numerically efficient and computationally cheap but, in general, the stability of the original system may be lost, and no general error bound similar to that of BT is available except under special conditions [Citation22].
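For orientation, the Gramian computation underlying BT can be sketched in a few lines (a minimal dense sketch; the random stable system and all variable names are illustrative, not taken from this article):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable first-order system x' = A x + B u, y = C x.
rng = np.random.default_rng(0)
n = 20
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # eigenvalues near -2
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Controllability/observability Gramians: A P + P A^T = -B B^T, A^T Q + Q A = -C^T C.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: their decay indicates how far the system can be truncated.
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]
```

The dense Lyapunov solve above is exactly the $\mathcal{O}(n^{3})$-time, $\mathcal{O}(n^{2})$-memory step that the low-rank ADI/Smith iterations are designed to avoid.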
Apart from the dimension reduction methods mentioned above, a number of methods based on orthogonal polynomials, such as the Chebyshev polynomial method [Citation23,Citation24], the Laguerre-SVD method [Citation25–Citation27] and the general orthogonal polynomials method [Citation28,Citation29], have also received attention in the dimension reduction of large-scale systems.
In this article, we discuss the time-domain dimension reduction methods based on orthogonal polynomials for second-order systems. A structure-preserving dimension reduction method is presented. This method expands the state and output variables in the space spanned by orthogonal polynomials, and then a structure-preserving reduced model is obtained by a projection transformation. The resulting reduced system not only preserves the second-order structure but also guarantees the stability under certain conditions. Additionally, an error estimate is given, and the effectiveness of this new method is shown by three test examples.
The remainder of this article is organized as follows. In Section 2, we briefly introduce some preliminary properties of general orthogonal polynomials. In Section 3, we present the dimension reduction methods based on orthogonal polynomials, including dimension reduction of the equivalent first-order system and the method of second-order system. The error and stability of the reduced system are also discussed in this section. Three test examples are given in Section 4. Conclusions are presented in Section 5.
Throughout this article, the following notation is used. The set of real numbers is denoted by $\mathbb{R}$. The $n \times n$ identity matrix is denoted by $I_n$ and the zero matrix by 0. If the dimension of $I_n$ is apparent from the context, we simply write $I$; the dimension of 0 will always be apparent from the context. For a square matrix $A$, $A \succeq 0$ denotes that $A$ is nonnegative definite, i.e. $x^{T}Ax \ge 0$ for every vector $x$, and $A \succ 0$ denotes that $A$ is positive definite, i.e. $x^{T}Ax > 0$ for all vectors $x \ne 0$.
2. General orthogonal polynomials
The orthogonal polynomials $\{\phi_i(t)\}_{i \ge 0}$, which span the orthogonal polynomial space, satisfy with respect to the weight function $w(t)$ over the interval $[a, b]$ the orthogonality condition
$$\int_{a}^{b} w(t)\,\phi_i(t)\,\phi_j(t)\,\mathrm{d}t = 0, \qquad i \ne j.$$
Note that the Chebyshev polynomials, Legendre polynomials, Laguerre polynomials, Hermite polynomials and Jacobi polynomials are all generated by a three-term recurrence relation of the form
$$\phi_{i+1}(t) = (a_i t + b_i)\,\phi_i(t) - c_i\,\phi_{i-1}(t), \qquad i \ge 1,$$
with coefficients $a_i$, $b_i$, $c_i$ specific to each family.
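As a concrete instance, the Legendre polynomials satisfy $(i+1)P_{i+1}(t) = (2i+1)\,tP_i(t) - iP_{i-1}(t)$; the short sketch below (the helper name is ours) evaluates the family this way and can be checked against the closed forms of $P_2$ and $P_3$:

```python
import numpy as np

def legendre_recurrence(deg, t):
    """Evaluate P_0..P_deg at points t via the three-term recurrence
    (i+1) P_{i+1} = (2i+1) t P_i - i P_{i-1}."""
    t = np.asarray(t, dtype=float)
    P = [np.ones_like(t), t.copy()]
    for i in range(1, deg):
        P.append(((2 * i + 1) * t * P[i] - i * P[i - 1]) / (i + 1))
    return P

t = np.linspace(-1.0, 1.0, 101)
P = legendre_recurrence(4, t)
# Closed forms: P_2(t) = (3 t^2 - 1)/2,  P_3(t) = (5 t^3 - 3 t)/2.
```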
An important property of orthogonal polynomials is the differential recurrence relation of the form
$$\phi_i(t) = \alpha_i\,\dot{\phi}_{i+1}(t) + \beta_i\,\dot{\phi}_i(t) + \gamma_i\,\dot{\phi}_{i-1}(t),$$
whose coefficients for the classical orthogonal polynomials are listed in Table 1.
Table 1. Differential recurrence coefficients for classical orthogonal polynomials.
Orthogonal polynomials have been successfully used to reduce very large systems by many researchers; see [Citation24,Citation25,Citation28,Citation29] and the references therein. It should be pointed out that the Chebyshev, Legendre and Jacobi polynomials are suitable for dimension reduction on the domain $[-1, 1]$, while the Hermite and Laguerre polynomials are suitable on unbounded domains. Suitable transformations are required to map the intervals on which the orthogonal polynomials are defined onto those of specific problems; for instance, using the affine change of variable $\tau = (2t - a - b)/(b - a)$, we can transform the domain $[a, b]$ into the domain $[-1, 1]$.
The convergence behaviour should be considered when a given function is expanded in the space spanned by orthogonal polynomials. Because the zeros of the Laguerre polynomials and the Hermite polynomials are widely spread over $[0, \infty)$ and $(-\infty, \infty)$, respectively, the Chebyshev and Legendre polynomials have better convergence properties than the Laguerre and Hermite polynomials [Citation31]. Which of these orthogonal polynomials is best suited for reduction depends on the specific problem.
In the following, we briefly present two lemmas for dimension reduction based on general orthogonal polynomials.
Lemma 2.1 ([Citation31]). If $p(t)$ is an arbitrary polynomial with $\deg(p) < i$, where $\deg(p)$ denotes the degree of the polynomial $p(t)$, then $\int_{a}^{b} w(t)\,p(t)\,\phi_i(t)\,\mathrm{d}t = 0$.
Lemma 2.2. If $\{\phi_i(t)\}$ are orthogonal polynomials, then the following two equations hold.
Proof. The proof of the first equation can be found in [Citation28]. We only need to prove the second equation. According to the degree properties of the involved polynomials, it suffices to show that the basis functions involved are linearly independent, which follows from the orthogonality of the $\phi_i(t)$. This implies that the conclusion holds. □
3. Dimension reduction of second-order systems
In this section, we discuss two dimension reduction methods of second-order systems by general orthogonal polynomials: dimension reduction of equivalent first-order systems, and dimension reduction of second-order systems by direct projection.
3.1. Dimension reduction of equivalent first-order systems
In this method, we first transform the second-order system to an equivalent first-order model, and then reduce its dimension by orthogonal polynomials.
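One standard way to carry out this transformation (the block layout below is a common choice of linearization; the article's exact form (2) may differ in details) stacks the state as $[x;\,\dot{x}]$:

```python
import numpy as np

def linearize_second_order(M, D, K, B, C):
    """Rewrite M x'' + D x' + K x = B u, y = C x as
    E z' = A z + F u, y = G z with z = [x; x'] (one standard choice)."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    E = np.block([[I, Z], [Z, M]])          # block-diagonal descriptor matrix
    A = np.block([[Z, I], [-K, -D]])        # companion-form dynamics
    F = np.vstack([np.zeros((n, B.shape[1])), B])
    G = np.hstack([C, np.zeros((C.shape[0], n))])
    return E, A, F, G

# Small illustrative system.
M = np.diag([2.0, 1.0])
D = 0.1 * np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = np.ones((2, 1)); C = np.ones((1, 2))
E, A, F, G = linearize_second_order(M, D, K, B, C)
```

Note the doubled state dimension: this is exactly the memory overhead criticized later in Section 3.2.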
Now we present the concrete dimension reduction procedure. First, we expand the state vector and the input variable of the equivalent system (2) in the approximate forms (4) and (3), where the expansion coefficients are vectors to be determined. Substituting (4) and (3) into (2) leads to
After rearranging the above equation, we get
Comparing the constant terms and the coefficients of the same basis polynomials on both sides, we have
By the initial condition and (4), we can then get the following linear equation for the expansion coefficients:
The coefficient vectors can be computed by solving the linear Equation (6). Then we can construct the projection matrix $Q$ by the modified Gram–Schmidt scheme or the QR decomposition, so that $Q^{T}Q = I$. Projecting the state onto the range of $Q$, we get the first-order reduced system (7), whose coefficient matrices are obtained by projecting the original system matrices with $Q$.
The dimension reduction algorithm can be stated as shown in Algorithm 1 below.
Algorithm 1.
Dimension reduction algorithm for first-order systems
1: Compute the coefficient matrix by solving (6);
2: Construct the projection matrix $Q$ such that its columns span the computed coefficient vectors and $Q^{T}Q = I$;
3: Compute the coefficient matrices of the reduced system by projection with $Q$;
4: Output the reduced first-order system (7).
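Steps 2–3 of Algorithm 1 can be sketched as follows (a hedged illustration: `X` is a random placeholder for the coefficient matrix that solving (6) would provide, and the test system is synthetic):

```python
import numpy as np

n, m, p = 50, 8, 1              # state dim, number of coefficient vectors, inputs/outputs
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))

# Placeholder for the coefficient vectors that would be obtained from (6).
X = rng.standard_normal((n, m))

# Step 2: orthonormal projection basis via (economy-size) QR.
Q, _ = np.linalg.qr(X)          # Q^T Q = I_m, span(Q) = span(X)

# Step 3: reduced first-order system by Galerkin projection.
Ar, Br, Cr = Q.T @ A @ Q, Q.T @ B, C @ Q
```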
3.2. Dimension reduction of second-order systems
Algorithm 1 reduces the equivalent first-order system; it is a linearization approach. There are two major drawbacks to such an approach: the corresponding linear system (2) has a state space of double dimension, which increases memory requirements, and the reduced system (7) is a general first-order model, so the second-order structure of the original system is not preserved. For engineering design and control, it is extremely desirable to have a structure-preserving reduced-order model.
In this section, we apply the orthogonal polynomials to the second-order system directly, and get a structure-preserving dimension reduction algorithm for second-order systems in the time domain.
We expand the state vector and the input variable of the second-order system (1) in the approximate forms (8) and (3), where the expansion coefficients are vectors to be determined. Substituting (8) and (3) into (1) leads to
Substituting (3) into (9), and using it repeatedly, we have
Comparing the constant terms and the coefficients of the same basis polynomials on both sides, we have
From the initial conditions and (8), we can then get the following linear equation for the expansion coefficients:
Solving the linear Equation (11), we can get the coefficient vectors. Then the projection matrix $W$ can be constructed by the QR decomposition or the modified Gram–Schmidt scheme, and we have $W^{T}W = I$. Finally, using the orthogonal projection transformation of the state onto the range of $W$, we obtain the reduced second-order system (13) with coefficient matrices $M_r = W^{T}MW$, $D_r = W^{T}DW$, $K_r = W^{T}KW$, $B_r = W^{T}B$ and $C_r = CW$.
The procedure can be described by Algorithm 2.
Algorithm 2.
Dimension reduction algorithm for second-order systems
1: Compute the coefficient matrix by solving (11);
2: Construct the projection matrix $W$ such that its columns span the computed coefficient vectors and $W^{T}W = I$;
3: Compute the coefficient matrices of the reduced system by $M_r = W^{T}MW$, $D_r = W^{T}DW$, $K_r = W^{T}KW$, $B_r = W^{T}B$ and $C_r = CW$;
4: Output the reduced second-order system (13).
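Step 3 amounts to congruence transformations with W, which is what keeps the second-order structure; a minimal sketch (with a random orthonormal W standing in for the basis from (11), and synthetic symmetric matrices):

```python
import numpy as np

N, r = 60, 10
rng = np.random.default_rng(2)

def spd(n, rng):
    """Random symmetric positive definite test matrix (illustrative)."""
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

M, K = spd(N, rng), spd(N, rng)
D = 0.05 * M + 0.05 * K          # proportional damping: symmetric, positive definite
B = rng.standard_normal((N, 1)); C = rng.standard_normal((1, N))

W, _ = np.linalg.qr(rng.standard_normal((N, r)))   # placeholder basis, W^T W = I_r

# Reduced second-order system: the second-order structure is kept.
Mr, Dr, Kr = W.T @ M @ W, W.T @ D @ W, W.T @ K @ W
Br, Cr = W.T @ B, C @ W
```

Because the reduced matrices are congruences of symmetric (semi)definite matrices, they inherit symmetry and definiteness, which is the fact used in the stability analysis below.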
In Algorithm 2, the main computational expense is spent in solving the linear Equation (11) (or (6) in Algorithm 1). A direct method for this equation requires $\mathcal{O}(n^{3})$ flops, where n is the dimension of the linear system (n = mN for Equation (11)). For dense operations, the computational complexities are almost of the same order as those of the balanced truncation method [Citation6]. In general, the work involved in performing operations on a sparse matrix is about $\mathcal{O}(cn)$ [Citation31], where c denotes the average number of nonzero elements per row/column of the sparse matrix. In practice, it is important to balance the accuracy and the computational complexity of the reduced system, and iterative solvers, such as the generalized minimal residual (GMRES) method (with restarts) [Citation32,Citation33] and the biconjugate gradient stabilized (BiCGSTAB) method [Citation34], can be used to solve these equations. The numerical results in Section 4 show that, for time-domain simulation of the test examples, our approach can be superior to other model reduction methods in some situations.
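For instance, a restarted GMRES solve with SciPy looks as follows (the sparse matrix here is a generic diagonally dominant stand-in, not the actual coefficient matrix of (11)):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
rng = np.random.default_rng(3)

# Sparse, diagonally dominant stand-in for the coefficient matrix of (11).
A = sp.random(n, n, density=5.0 / n, random_state=3, format="csr")
A = A + sp.diags(np.full(n, 10.0))
b = rng.standard_normal(n)

# Restarted GMRES; BiCGSTAB would be spla.bicgstab(A, b) instead.
x, info = spla.gmres(A, b, restart=50, maxiter=2000)
# info == 0 signals convergence to the requested tolerance.
```

Only matrix–vector products with A are needed, so the per-iteration cost scales with the number of nonzeros rather than with $n^{2}$.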
Next, we discuss some properties of the reduced-order model obtained by Algorithm 2, including coefficients matching, error estimation and stability.
3.2.1. Coefficients matching
First, we approximately expand the state variable and the output variable of the original system (1), and those of the reduced-order model (13), in the space spanned by orthogonal polynomials as the expansions (14) and (15).
Lemma 3.1. The coefficients $X_i$ and $\hat{X}_i$ of the state expansions in (14) satisfy $X_i = W\hat{X}_i$ for $i = 0, 1, \ldots, m-1$, where W is the projection matrix obtained by Algorithm 2.
Proof. From Algorithm 2, we have that the computed coefficient vectors lie in the range of W; hence each coefficient vector can be written as W times a reduced vector. From (11), these vectors satisfy a linear system. Premultiplying both sides by $W^{T}$, we obtain the linear system (16). Considering the reduced-order system (13) with the corresponding initial conditions, its expansion coefficients satisfy the linear system (17). Since $W^{T}W = I$, Equations (16) and (17) are equivalent, so their solutions coincide. Therefore, we have $X_i = W\hat{X}_i$ for $i = 0, 1, \ldots, m-1$. □
Theorem 3.2. The coefficients $Y_i$ and $\hat{Y}_i$ of the output expansions in (15) satisfy $Y_i = \hat{Y}_i$ for $i = 0, 1, \ldots, m-1$.
Proof. According to (14) and (15), $Y_i = CX_i$ and $\hat{Y}_i = C_r\hat{X}_i$. By Lemma 3.1 and $C_r = CW$, we have $\hat{Y}_i = CW\hat{X}_i = CX_i = Y_i$ for $i = 0, 1, \ldots, m-1$. □
Theorem 3.2 illustrates that the output variable of the reduced-order system (13) matches the first m expansion coefficients of the output variable
of the original system (1) in the orthogonal polynomial space.
3.2.2. Error and stability analysis
In this section, we take into account the approximation error between the original system (1) and its reduced-order system (13).
First, for an arbitrary fixed nonnegative integer k, we approximately expand the state variable and the output variable of the original system (1), and those of the reduced-order model (13), and compare the resulting approximate output variables. Similar to (11), the expansion coefficients of the original system satisfy the following linear equations:
where the coefficient matrices are as in (11). Similarly, the coefficients of the reduced-order model satisfy the following linear equations:
Theorem 3.3. With the above notation, the following two conclusions hold.
Proof. Conclusion (1) holds by Lemma 3.1. The proof of conclusion (2) can be found in [Citation28], and we omit it here. □
Theorem 3.4. For the approximate output variables of the original system (1) and of the reduced-order model (13), the following error bound holds.
Proof. First, from conclusion (1) of Theorem 3.3, we obtain the corresponding relation between the coefficient vectors. Further, from conclusion (2) of Theorem 3.3, together with $W^{T}W = I$ and $C_r = CW$, we obtain the stated bound. □
Next, we discuss the stability of the reduced-order system (13).
Lemma 3.5 ([Citation35]). The linear system (2) is stable if $E = E^{T} \succ 0$ and $A + A^{T} \preceq 0$.
Lemma 3.6 ([Citation5]). The second-order system (1) is stable if $M = M^{T} \succ 0$, $D = D^{T} \succeq 0$ and $K = K^{T} \succ 0$.
Theorem 3.7. The reduced-order system (13) is stable if $M = M^{T} \succ 0$, $D = D^{T} \succeq 0$ and $K = K^{T} \succ 0$.
The stability of the reduced-order system (13) is guaranteed because the symmetry and nonnegative definiteness of the matrices M, D and K are invariant under the congruence transformation with W.
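This can be checked numerically: for symmetric $M \succ 0$, $D \succ 0$, $K \succ 0$, every eigenvalue $\lambda$ of $\det(\lambda^{2}M + \lambda D + K) = 0$ has negative real part, and a congruence $W^{T}(\cdot)W$ with a full-rank W preserves these properties. A small synthetic sketch (all matrices illustrative):

```python
import numpy as np

def quadratic_eigs(M, D, K):
    """Eigenvalues of det(lam^2 M + lam D + K) = 0 via companion linearization."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ D]])
    return np.linalg.eigvals(A)

rng = np.random.default_rng(4)
n = 8
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M = G1 @ G1.T + n * np.eye(n)          # symmetric positive definite
K = G2 @ G2.T + n * np.eye(n)          # symmetric positive definite
D = 0.1 * M + 0.1 * K                  # symmetric positive definite

# Congruence with a full-column-rank W preserves symmetry and definiteness,
# so the reduced quadratic pencil inherits stability.
W, _ = np.linalg.qr(rng.standard_normal((n, 3)))
lam = quadratic_eigs(W.T @ M @ W, W.T @ D @ W, W.T @ K @ W)
```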
4. Numerical examples
In order to demonstrate the accuracy and effectiveness of the proposed algorithms, three test examples are given in this section. All numerical experiments were run in MATLAB (R2008a; The MathWorks, Inc., Natick, MA, USA), and we used ode15s to solve differential equations.
Example 1. Clamped beam model: this system is a second-order model with one input and one output [Citation7]. The input represents the force applied to the structure at the free end, and the output is the resulting displacement. M is a diagonal matrix, and D and K are dense matrices. In this example, we take zero initial conditions.
Two kinds of approaches are applied to this system:
Using Algorithm 1, the balanced truncation reduction (BTR) method in [Citation36], and the Krylov subspace method [Citation35] to the equivalent first-order system of order 348;
Using Algorithm 2, the second-order balanced truncation (SOBT) method in [Citation10], and the second-order Arnoldi method (SOAR) in [Citation14] to the original second-order system directly.
Figure 1(a) shows the transient responses of the equivalent first-order system and of the reduced models obtained by the BTR method, the Arnoldi method and Algorithm 1 based on the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials. The corresponding relative errors are shown in Figure 1(b). To achieve the same accuracy, the order of the reduced model obtained by the Arnoldi method is taken as 40, while the others are taken as 20.
Figure 1. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 1, the BTR method and the Arnoldi method in Example 1.
![Figure 1. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 1, the BTR method and the Arnoldi method in Example 1.](/cms/asset/709044b1-efce-4590-aaac-345375be970c/nmcm_a_867274_f0001_b.gif)
Figure 2 shows the transient responses and relative errors of the reduced models obtained by the SOBT method, the SOAR method and Algorithm 2 based on the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials. To generate a reduced model of order 26, the SOBT method and Algorithm 2 require comparable numbers of flops. The computational times of obtaining the reduced models are listed in Table 2.
Figure 2. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 1.
![Figure 2. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 1.](/cms/asset/39f74999-cc78-4c7e-82f2-0ae8e5aadc4d/nmcm_a_867274_f0002_b.gif)
Table 2. Computational times of obtaining reduced models for Example 1.
The simulation results show that Algorithm 1 is effective for the dimension reduction of the second-order system, although it does not preserve the second-order structure; Algorithm 2 not only preserves the second-order structure but also yields an accurate approximation.
Example 2. International Space Station (ISS) model: this is a structural MIMO (multi-input and multi-output) model of component 1r (Russian service module) of the ISS [Citation7]. The system is a second-order model with 3 inputs and 3 outputs, and the matrices M, D and K are diagonal.
In this example, we use Algorithm 2, the SOBT method in [Citation10], and the SOAR method [Citation14] with zero initial conditions. The output includes three components $y_1(t)$, $y_2(t)$ and $y_3(t)$. Our numerical simulation shows that the orthogonal polynomial methods can approximate these components of the output very well if the reduced dimension is set to 27. Figure 3 shows the transient responses of the component $y_3(t)$ of the original system and of the reduced-order systems obtained by the SOBT method, the SOAR method and Algorithm 2 based on the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials, respectively. The relative errors are also shown in this figure. The computational times of obtaining the reduced models are listed in Table 3. The simulation results demonstrate that the MIMO second-order reduced models obtained by Algorithm 2 give an accurate approximation.
Figure 3. Transient responses (a) and relative errors (b) of the output y3(t) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 2.
![Figure 3. Transient responses (a) and relative errors (b) of the output y3(t) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 2.](/cms/asset/9c27ef74-1065-449b-98f3-3e50b388beea/nmcm_a_867274_f0003_b.gif)
Table 3. Computational times of obtaining reduced models for Example 2.
Example 3. We consider a second-order model arising from dynamic analysis in structural engineering. The sparse matrix K is the stiffness matrix and the diagonal matrix M is the mass matrix for the dynamic modelling of structures, and the damping matrix is chosen as the Rayleigh combination $D = \alpha M + \beta K$ for given parameters $\alpha$ and $\beta$. B and C are both vectors with all elements equal to one. The data were extracted from http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/bcsstruc1/bcsstk08.html.
In this example, we use the efficient SOBT method proposed in [Citation9], which computes the reduced-order system by exploiting the sparsity and second-order structure of the original system; here, we calculated 20 shift parameters following the heuristic parameter choice proposed by Penzl [Citation37]. Numerical simulation shows that the orthogonal polynomial methods perform very well. We compare our algorithm with the SOBT method [Citation9] and the SOAR method [Citation14]. The transient responses and relative errors of the five methods, namely Algorithm 2 with the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials, the SOBT method and the SOAR method, are shown in Figure 4. The computational times of obtaining the reduced models of order 18 are listed in Table 4. These results illustrate the effectiveness of our method for this example.
Figure 4. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 3.
![Figure 4. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 3.](/cms/asset/dd9e5e78-b571-4291-9fe5-1a6561b56a8b/nmcm_a_867274_f0004_b.gif)
Table 4. Computational times of obtaining reduced models for Example 3.
5. Conclusions
In this article, we have considered time-domain dimension reduction methods for second-order systems based on general orthogonal polynomials, and we have presented a structure-preserving dimension reduction method for second-order systems. This method expands the state and output variables in the space spanned by orthogonal polynomials, and a structure-preserving reduced model is then obtained by a projection transformation. The resulting reduced model not only preserves the second-order structure but also guarantees stability under certain conditions. Numerical examples have been given to demonstrate the effectiveness of the proposed methods. To make the method more efficient and reliable, it is necessary to develop numerical methods for the large linear equations (6) and (11); how to solve these large-scale sparse linear equations effectively will be the focus of our future research.
Funding
The work was supported by the Natural Science Foundation of China [grant number 11071192], [grant number 11371287]; the International Science and Technology Cooperation Program of China [grant number 2010DFA14700].
References
- R.R. Craig Jr, Structural Dynamics: An Introduction to Computer Methods, John Wiley and Sons, New York, 1981.
- R.W. Freund, Padé-type model reduction of second-order and higher-order linear dynamical systems, in Dimension Reduction of Large-Scale Systems, P. Benner, V. Mehrmann, and D. Sorensen, eds., Lecture Notes in Computational Science and Engineering Vol. 45, Springer-Verlag, Berlin, 2005, pp. 191–223.
- J. Lienemann, E.B. Rudnyi, and J.G. Korvink, MST MEMS model order reduction: requirements and benchmarks. Linear Algebra Appl. 415 (2–3) (2006), pp. 469–498.
- Y. Chahlaoui, K.A. Gallivan, A. Vandendorpe, and P. Van Dooren, Model reduction of second-order systems, in Dimension Reduction of Large-Scale Systems, P. Benner, V. Mehrmann, and D. Sorensen, eds., Lecture Notes in Computational Science and Engineering Vol. 45, Springer-Verlag, Berlin, 2005, pp. 149–172.
- B. Salimbahrami, Structure preserving order reduction of large scale second order models, Ph.D. Thesis, Technische Universitaet Muenchen, 2005.
- A.C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2005.
- A.C. Antoulas, D.C. Sorensen, and S. Gugercin, A survey of model reduction methods for large-scale systems. Contemp. Math. 280 (2001), pp. 193–219.
- T. Ersal, H.K. Fathy, L.S. Louca, D.G. Rideout, and J.L. Stein, A review of proper modeling techniques. J. Dyn. Syst. Meas. Control 130 (6) (2008), p. 061008.
- P. Benner and J. Saak, Efficient balancing-based MOR for large-scale second-order systems. Math. Comput. Model. Dyn. Syst. 17 (2) (2011), pp. 123–143.
- Y. Chahlaoui, D. Lemonnier, A. Vandendorpe, and P. Van Dooren, Second-order balanced truncation. Linear Algebra Appl. 415 (2–3) (2006), pp. 373–384.
- M. Lehner and P. Eberhard, A two-step approach for model reduction in flexible multibody dynamics. Multibody Syst. Dyn. 17 (2–3) (2007), pp. 157–176.
- D.G. Meyer and S. Srinivasan, Balancing and model reduction for second-order form linear systems. IEEE Trans. Autom. Control 41 (11) (1996), pp. 1632–1644.
- T. Reis and T. Stykel, Balanced truncation model reduction of second-order systems. Math. Comput. Model. Dyn. Syst. 14 (5) (2008), pp. 391–406.
- Z.J. Bai and Y.F. Su, Dimension reduction of large-scale second-order dynamical systems via a second-order Arnoldi method. SIAM J. Sci. Comput. 26 (5) (2005), pp. 1692–1709.
- M. Lehner and P. Eberhard, On the use of moment-matching to build reduced order models in flexible multibody dynamics. Multibody Syst. Dyn. 16 (2) (2006), pp. 191–211.
- B. Salimbahrami and B. Lohmann, Order reduction of large scale second-order systems using Krylov subspace methods. Linear Algebra Appl. 415 (2–3) (2006), pp. 385–405.
- S. Gugercin, D.C. Sorensen, and A.C. Antoulas, A modified low-rank Smith method for large-scale Lyapunov equations. Numer. Algorithms 32 (1) (2003), pp. 27–55.
- P. Heydari and M. Pedram, Model-order reduction using variational balanced truncation with spectral shaping. IEEE Trans. Circuits Syst. I: Regular Papers 53 (2006), pp. 879–891.
- J.R. Li and J. White, Efficient model reduction of interconnect via approximate system Gramians, in Proceedings of the 1999 IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, San Jose, CA, 7–11 November, pp. 380–384.
- C.W. Rowley, Model reduction for fluids, using balanced proper orthogonal decomposition. Internat. J. Bifur. Chaos 15 (3) (2005), pp. 997–1013.
- D.C. Sorensen and A.C. Antoulas, The Sylvester equation and approximate balanced reduction. Linear Algebra Appl. 351 (2002), pp. 671–700.
- C.A. Beattie and S. Gugercin, Krylov-based minimization for optimal H2 model reduction, in 46th IEEE Conference on Decision and Control, New Orleans, LA, 12–14 December 2007, pp. 4385–4390.
- J.M. Wang, C.C. Chu, Q.J. Yu, and E.S. Kuh, On projection-based algorithms for model-order reduction of interconnects. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 49 (11) (2002), pp. 1563–1585.
- J.M. Wang, Q.J. Yu, and E.S. Kuh, Passive model order reduction algorithm based on Chebyshev expansion of impulse response of interconnect networks, in Proceedings of the 37th Annual Design Automation Conference, Los Angeles, CA, June 2000, pp. 520–525.
- Y. Chen, V. Balakrishnan, C. Koh, and K. Roy, Model reduction in the time-domain using Laguerre polynomials and Krylov methods, in Proceeding of the 2002 Design, Automation and Test in Europe Conference and Exposition, Paris, 4–8 March 2002, pp. 931–935.
- L. Knockaert and D. De Zutter, Laguerre-SVD reduced-order modeling. IEEE Trans. Microw. Theory Tech. 48 (9) (2000), pp. 1469–1475.
- L. Knockaert and D. De Zutter, Stable Laguerre-SVD reduced-order modeling. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 50 (4) (2003), 576–579.
- Y.L. Jiang and H.B. Chen, Time domain model order reduction of general orthogonal polynomials for linear input-output systems. IEEE Trans. Automat. Control 57 (2) (2012), pp. 330–343.
- Y.L. Jiang and H.B. Chen, Application of general orthogonal polynomials to fast simulation of nonlinear descriptor systems through piecewise-linear approximation. IEEE Trans. Comput. Aided Design Integrated Circuits Syst. 31 (5) (2012), pp. 804–808.
- S.C. Tsay and T.T. Lee, Analysis and optimal control of linear time-varying systems via general orthogonal polynomials. Internat. J. Systems Sci. 18 (8) (1987), pp. 1579–1594.
- G. Szegö, Orthogonal Polynomials, American Mathematical Society, New York, 1939.
- Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM Publications, Philadelphia, PA, 2003.
- Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 7 (3) (1986), pp. 856–869.
- H.A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 13 (2) (1992), pp. 631–644.
- R.W. Freund, Krylov-subspace methods for reduced-order modeling in circuit simulation. J. Comput. Appl. Math. 123 (1–2) (2000), pp. 395–421.
- S. Gugercin and A.C. Antoulas, A survey of model reduction by balanced truncation and some new results. Internat. J. Control. 77 (8) (2004), pp. 748–766.
- T. Penzl, A cyclic low-rank Smith method for large sparse Lyapunov equations. SIAM J. Sci. Comput. 21 (4) (2000), pp. 1401–1418.