ABSTRACT
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic–stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, extending them to the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by preserving the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, the system parameter matrices and the covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm is presented in a unified framework in which the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention is paid to the computation of a stabilised Kalman gain matrix and of a positive real solution when required. The efficiency and robustness of the unified algorithm are demonstrated via a thorough simulation example.
Acknowledgments
The authors would like to thank the anonymous reviewers for their constructive criticism during the review process.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. In reality, it is a one-to-many relationship, but all realisations are related by a similarity transformation.
2. Canonical Variate Analysis (CVA) is an algorithm developed by Larimore (1983), which is computationally equivalent to CCA.
3. To avoid an abuse of notation when no time index is involved, and for lack of better terms, we still use the terms past and future as they are commonly used in time-domain subspace system identification.
4. When the system is deterministic, the conditional covariances are rank deficient (rank{Σff|u} = nh), so it is best to use the SVD for the square roots in (33a)–(33b). In the deterministic–stochastic case, the matrices are full rank, with rank{Σff|u} = nyi.
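As an illustrative aside (a minimal NumPy sketch with synthetic data, not the paper's MATLAB implementation), the SVD-based square root remains well defined even when the covariance is rank deficient, a case in which a Cholesky factor need not exist:

```python
import numpy as np

# Synthetic rank-deficient symmetric PSD matrix, mimicking the
# deterministic case where the conditional covariance has rank
# nh smaller than its dimension (here rank 2 in dimension 4).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))
Sigma = X @ X.T                      # 4 x 4, rank 2

# Symmetric square root via the SVD; well defined despite the
# rank deficiency, since zero singular values map to zero.
U, s, _ = np.linalg.svd(Sigma)
sqrt_Sigma = U @ np.diag(np.sqrt(s)) @ U.T
```

The product `sqrt_Sigma @ sqrt_Sigma` reproduces `Sigma` up to rounding, which is all the subsequent computations require of a square root.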
5. There is a more accurate way of doing this for CCA, which we show in Appendix 1.
6. MATLAB is a registered trademark of The Math Works, Inc.
7. Note that we use (•) to indicate a variable row dimension, which depends on the method according to (38a)–(38b).
8. The block vec operator, vecm, takes an (n × mp) matrix with p blocks, each of size (n × m), and converts it into an (np × m) matrix.
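A minimal NumPy sketch of this operator (illustrative only; the function name follows the note):

```python
import numpy as np

def vecm(M, m):
    """Block vec operator: split an (n x m*p) matrix into its p
    column blocks of size (n x m) and stack them vertically into
    an (n*p x m) matrix, as defined in note 8."""
    n, mp = M.shape
    assert mp % m == 0, "column count must be a multiple of m"
    p = mp // m
    return np.vstack([M[:, k * m:(k + 1) * m] for k in range(p)])

# Example: a 2 x 6 matrix with p = 3 blocks of size 2 x 2.
M = np.arange(12).reshape(2, 6)
V = vecm(M, 2)          # shape (6, 2); block k of M becomes rows 2k..2k+1
```

Each (n × m) block keeps its internal layout; only the blocks are restacked vertically.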
9. There is a similarity invariance relation between the two matrices, since both are permutation matrices composed of {0, 1} elements.
10. In Ramos et al. (2011), the authors extracted the vertical Markov parameters h^v_k = C_2 A_4^{k − 1} B_2, for k = 1, 2, …, M, from the first ny rows, formed a Hankel matrix from them and computed its SVD to find the vertical parameters. We believe this approach is not as accurate as the new approach presented here, because it uses information from the vertical direction only and not from both directions, as we propose here.
11. Although the matrix in question is symmetric, it may not be positive definite and therefore may not have a Cholesky factorisation.
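A small NumPy illustration of this point (synthetic matrix, not one arising from the algorithm): symmetry alone does not guarantee a Cholesky factor, and a common safeguard is to shift the spectrum until the matrix is positive definite.

```python
import numpy as np

# Symmetric but indefinite: eigenvalues are 3 and -1.
S = np.array([[1.0, 2.0],
              [2.0, 1.0]])

try:
    np.linalg.cholesky(S)
    has_chol = True
except np.linalg.LinAlgError:
    has_chol = False          # Cholesky fails despite symmetry

# One safeguard (illustrative, not the paper's method): shift the
# diagonal by the most negative eigenvalue plus a small margin.
shift = max(0.0, -np.linalg.eigvalsh(S).min()) + 1e-10
L = np.linalg.cholesky(S + shift * np.eye(2))   # now succeeds
```

In the identification context, this is why the algorithm must check for (and, when required, enforce) a positive real solution before factorising.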