
Block Krylov subspace methods for large-scale matrix computations in control


Abstract

In this paper we show how to improve the approximate solution of a large Sylvester equation obtained by an arbitrary method. Such problems appear in many areas of control theory, such as the computation of Hankel singular values and model reduction algorithms. Moreover, we propose a new method, based on a refinement process and the weighted block Arnoldi algorithm, for solving large Sylvester matrix equations. Numerical tests demonstrate the effectiveness of these methods.

1 Introduction

An important class of linear matrix equations is the Sylvester equation

(1.1) $XA + BX = C.$

Since the fundamental work of Sylvester on the stability of motion, these matrix equations have been widely used in the stability theory of differential equations and play an important role in control and communications theory [1–5]. For small problems, direct methods for solving the Sylvester equation are attractive. The standard methods are based on the Hessenberg–Schur decomposition, which transforms the original equation into a form that can be solved easily. Iterative methods for solving large Sylvester matrix equations have been developed during the past years [6–8]. A class of classical methods known as Krylov subspace methods, which includes the block Arnoldi and weighted block Arnoldi algorithms, has been found to be suitable for sparse matrix computations. In this paper, we extend this idea and propose a new projection method for solving Equation (1.1) based on a weighted block Krylov subspace method. The paper is organized as follows. In Section 2, we introduce the main minimal residual algorithm for iteratively solving the Sylvester matrix equation (1.1). Several numerical examples are presented in Section 3. Finally, the conclusion is given in the last section.
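For small, dense instances, a direct solver of this kind is readily available; the following minimal sketch (not taken from the paper) solves a random, well-conditioned instance of Equation (1.1) with SciPy's Bartels–Stewart-type routine, which is closely related to the Hessenberg–Schur approach. The matrices here are placeholders, not the test problems of Section 3.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Random, diagonally shifted test instance of XA + BX = C (Equation (1.1)).
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shift keeps the spectra of B and -A apart
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n))

# solve_sylvester(P, Q, R) solves P X + X Q = R, so XA + BX = C is passed as (B, A, C).
X = solve_sylvester(B, A, C)
print(np.linalg.norm(X @ A + B @ X - C))  # residual norm, close to machine precision here
```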

2 Block refinement Arnoldi method

In this section we show that an approximate solution of the Sylvester equation obtained by any method can be improved; in other words, its accuracy can be increased. If this idea is applicable, then we obtain an iterative method for solving the Sylvester equation. Let the bases $V_m = [v_1, \dots, v_m]$ and $W_m = [w_1, \dots, w_m]$ be constructed by the block Arnoldi process; thus we have

$$V_m^T V_m = I_m, \qquad W_m^T W_m = I_m.$$

The square block Hessenberg matrices $H_m$ and $\hat{H}_m$ (with $m = r \cdot l$, where $r$ and $l$ are the dimensions of the blocks), whose nonzero entries are the scalars $h_{ij}$ and $\hat{h}_{ij}$, constructed by the block Arnoldi process, can be expressed as

$$H_m = V_m^T A^T V_m, \qquad \hat{H}_m = W_m^T B W_m.$$

Let $X^{(0)}$ be an initial approximate solution of the Sylvester equation $XA + BX = C$, where $A, B, C, X^{(0)} \in \mathbb{R}^{n \times n}$.
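For concreteness, the following sketch (illustrative only; the function name and interface are not from the paper) shows one standard way to build such a basis and the corresponding square block Hessenberg matrix: applied to $A^T$ it produces $V_m$ and $H_m$, and applied to $B$ it produces $W_m$ and $\hat{H}_m$.

```python
import numpy as np

def block_arnoldi(M, V1, r):
    """Run r block Arnoldi steps with starting block V1 (n x l).

    Returns an orthonormal basis Q = [Q_1, ..., Q_r] of the block Krylov
    subspace spanned by V1, M V1, ..., M^(r-1) V1 and the square block
    Hessenberg matrix H = Q^T M Q of size m = r*l.  This is a minimal
    sketch without deflation or re-orthogonalization.
    """
    n, l = V1.shape
    Q_blocks = [np.linalg.qr(V1)[0]]
    H = np.zeros((r * l, r * l))
    for j in range(r):
        W = M @ Q_blocks[j]
        for i in range(j + 1):
            Hij = Q_blocks[i].T @ W                      # block entries h_{ij}
            H[i * l:(i + 1) * l, j * l:(j + 1) * l] = Hij
            W = W - Q_blocks[i] @ Hij                    # orthogonalize against Q_i
        if j + 1 < r:
            Qn, Rn = np.linalg.qr(W)                     # next orthonormal block
            H[(j + 1) * l:(j + 2) * l, j * l:(j + 1) * l] = Rn
            Q_blocks.append(Qn)
    return np.hstack(Q_blocks), H
```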

We also introduce the residual matrix

$$R^{(0)} = C - \bigl(X^{(0)}A + BX^{(0)}\bigr),$$

and let $Y_m \in \mathbb{R}^{m \times m}$ be the solution of the Sylvester equation

(2.1) $Y_m H_m^T + \hat{H}_m Y_m = W_m^T R^{(0)} V_m.$

If we set

(2.2) $X^{(1)} = X^{(0)} + W_m Y_m V_m^T,$

then the corresponding residual $R^{(1)} = C - (X^{(1)}A + BX^{(1)})$ satisfies

$$\begin{aligned}
R^{(1)} &= C - \bigl((X^{(0)} + W_m Y_m V_m^T)A + B(X^{(0)} + W_m Y_m V_m^T)\bigr)\\
&= R^{(0)} - W_m Y_m V_m^T A - B W_m Y_m V_m^T\\
&= R^{(0)} - W_m Y_m H_m^T V_m^T - W_m \hat{H}_m Y_m V_m^T\\
&= R^{(0)} - W_m \bigl(Y_m H_m^T + \hat{H}_m Y_m\bigr) V_m^T.
\end{aligned}$$

Since $Y_m$ is the solution of Equation (2.1), we have

$$R^{(1)} = R^{(0)} - W_m W_m^T R^{(0)} V_m V_m^T = 0.$$

According to the above results, we can develop an iterative method for solving the Sylvester equation when the matrices A, B and C are large and sparse. To apply this idea, we choose m < n and solve Equation (2.1) instead of XA + BX = C. In other words, this method first transforms the initial Sylvester equation into a Sylvester equation of smaller dimension; then, in each iteration step, it solves this reduced matrix equation and extends the obtained solution to a solution of the initial equation via Equation (2.2). The algorithm is as follows:
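A single refinement step of the kind just described can be sketched as follows. This reuses the hypothetical block_arnoldi helper above, uses SciPy's Sylvester solver for the reduced equation (2.1), and chooses random starting blocks, which the paper does not prescribe.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def refinement_step(A, B, C, X0, r, l, seed=0):
    """One refinement step in the spirit of Equations (2.1)-(2.2)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    R0 = C - (X0 @ A + B @ X0)                    # residual of the current iterate
    Vm, Hm = block_arnoldi(A.T, rng.standard_normal((n, l)), r)   # V_m and H_m = V_m^T A^T V_m
    Wm, Hhat = block_arnoldi(B, rng.standard_normal((n, l)), r)   # W_m and Hhat_m = W_m^T B W_m
    # Reduced Sylvester equation (2.1): Hhat_m Y_m + Y_m H_m^T = W_m^T R0 V_m
    Ym = solve_sylvester(Hhat, Hm.T, Wm.T @ R0 @ Vm)
    return X0 + Wm @ Ym @ Vm.T                    # update (2.2)
```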

Algorithm 1

Block refinement Arnoldi method (BRA)

Choose an initial solution $X^{(0)}$ and a tolerance $\epsilon$; select two numbers $r$ and $l$ for the block dimensions and set $m = r \cdot l$ ($m < n$).
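The remaining steps of the iteration follow Section 2: compute the residual, build the bases, solve the reduced equation (2.1), update via (2.2), and test for convergence. A minimal driver loop, under the same assumptions as the sketches above (hypothetical refinement_step helper, Frobenius-norm stopping test), might read:

```python
import numpy as np

def iterative_bra(A, B, C, X0, r, l, eps=1e-6, max_iter=50):
    """Iterative block refinement Arnoldi (BRA) sketch: repeat the
    refinement step until the residual norm drops below eps."""
    X = X0
    for _ in range(max_iter):
        R = C - (X @ A + B @ X)
        if np.linalg.norm(R, 'fro') < eps:
            break
        X = refinement_step(A, B, C, X, r, l)     # hypothetical helper sketched above
    return X
```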

3 Numerical examples

In this section, we present some numerical examples to illustrate the effectiveness of the algorithms described in this paper for large and sparse Sylvester equations. All numerical tests were performed in MATLAB on a PC with a 2.20 GHz processor and 2 GB of main memory.

Example 3.1

Consider the Sylvester equation XA + BX = C with n = 100. We apply the iterative block refinement Arnoldi method to solve this matrix equation and take $\epsilon = 10^{-6}$. In Table 1, we report the results for different values of m.

Here $A$, $B$ and $C$ are sparse banded $n \times n$ test matrices; $A$ and $B$ have diagonal entries equal to 10, and all three have their remaining nonzero entries concentrated on a few sub- and super-diagonals.
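The structure of this experiment can be reproduced with the sketches above by varying $m = r \cdot l$; the banded stand-in matrices below are illustrative only and are not the matrices used in the paper.

```python
import numpy as np

n = 100
# Banded stand-ins (not the paper's test matrices): diagonal plus two off-diagonals.
A = 10 * np.eye(n) + np.diag(np.full(n - 1, 1.0), -1) + np.diag(np.full(n - 1, 0.4), 1)
B = 10 * np.eye(n) + np.diag(np.full(n - 1, 2.0), -1) + np.diag(np.full(n - 1, 0.3), 1)
C = np.diag(np.full(n, 0.1)) + np.diag(np.full(n - 1, 2.0), -1) + np.diag(np.full(n - 1, 1.4), 1)

l = 2
for r in (2, 4, 8):                                # different Krylov dimensions m = r * l
    X = iterative_bra(A, B, C, np.zeros((n, n)), r, l, eps=1e-6)
    print(r * l, np.linalg.norm(X @ A + B @ X - C, 'fro'))
```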

Table 1 Implementation of iterative BRA method to solve the Sylvester equation with different values of m.

Example 3.2

Now consider the same matrices A, B and C as in Example 3.1. We apply the two iterative methods and the Hessenberg–Schur method to solve the Sylvester equation when the dimensions of the matrices are large. The results are shown in Table 2.

Table 2 Implementation of new Iterative methods and Hessenberg–Schur method for solving the Sylvester equation.

4 Comments and conclusion

In this paper, we introduced a new method for computing low-rank approximate solutions of large Sylvester matrix equations. Moreover, the refinement process presented in Section 2 can improve the results obtained by an arbitrary method; for example, in this paper we apply the refinement process together with the Hessenberg–Schur method. The experiments illustrate the advantages of the method for large sparse matrices.

Notes

Peer review under responsibility of Taibah University.

References

  1. B.N. Datta, Numerical Methods for Linear Control Systems: Design and Analysis, Elsevier Academic Press, Amsterdam, 2004.
  2. B.N. Datta, K. Datta, Theoretical and computational aspects of some linear algebra problems in control theory, in: C.I. Byrnes, A. Lindquist (Eds.), Computational and Combinatorial Methods in Systems Theory, Elsevier, Amsterdam, 1986, pp. 201–212.
  3. N.J. Higham, Accuracy and Stability of Numerical Algorithms, second ed., SIAM, Philadelphia, PA, 2002.
  4. C. Hyland, D. Bernstein, The optimal projection equations for fixed-order dynamic compensation, IEEE Trans. Autom. Control 29 (1984) 1034–1037.
  5. R.H. Bartels, G.W. Stewart, Solution of the matrix equation AX + XB = C, Commun. ACM 15 (1972) 820–826.
  6. G.H. Golub, S. Nash, C.F. Van Loan, A Hessenberg–Schur method for the problem AX + XB = C, IEEE Trans. Autom. Control 24 (1979) 909–913.
  7. G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, Baltimore, 1996.
  8. A. El Guennouni, K. Jbilou, A.J. Riquet, Block Krylov subspace methods for solving large Sylvester equations, Numer. Alg. 29 (2002) 75–96.