
Identification of linear time-invariant systems based on initial condition responses

Pages 217-226 | Received 24 Oct 2008, Accepted 01 Jun 2009, Published online: 26 Feb 2010

Abstract

Forced responses are commonly used for system identification, but only initial condition responses may be available when a system has no controllable inputs or forced responses cannot be practically measured. It is therefore meaningful to analyse whether initial condition responses can be used to estimate the system matrix. In this work, the identifiability of the nth order (n > 1) linear time-invariant systems from initial condition responses is analysed. Systems with only one measurable state (OMS systems) and those with n measurable states (NMS systems) are considered. Analysis indicates that one initial condition response is not sufficient to uniquely determine the system matrix of an OMS system and n initial condition responses are necessary. The identifiability of an OMS system with data from n independent initial condition responses is equivalent to that of an NMS system with only one initial condition response. Explicit formulations and a non-iterative algorithm are developed for both OMS and NMS systems.

1. Introduction

Mathematical models are widely used in many areas Citation1–5. Various methods are available for model parameter estimation, including adaptive or learning methods Citation6, least-squares estimators Citation5, the Kalman filter Citation7 and the extended Kalman filter Citation8. Whichever method is used, identifiability analysis is important for determining whether a model can be uniquely identified Citation9–12. Orlov et al. Citation12 investigated the identifiability of linear time-delay systems and found that the transfer function was identifiable online if a sufficiently non-smooth input signal was applied. Vidal et al. Citation13 analysed the identifiability of jump linear systems. It is widely accepted that rich inputs are critical for parameter identification Citation14,Citation15.

Most existing work has focused on forced responses and paid less attention to initial condition responses. However, a variety of applications such as chemical reactions and diffusions involve processes that have no controllable inputs Citation16 or whose forced responses are not measurable Citation17. It is therefore very meaningful to probe the usefulness of initial condition responses in parameter identification. Tadi and Cai Citation16 developed an iterative algorithm and Wagner Citation18 used a matrix logarithm to identify system parameters from initial condition responses. These methods, however, require all state variables to be measurable. In reality, it is common that only a part of the state variables are measurable Citation17. One can easily list numerous examples where sensors for measuring certain variables are either unavailable or impractical for in situ applications. The extreme case would be a system with only one measurable state (OMS). The identification of such systems has rarely been discussed. In this research, the identifiability and identification of nth order (n > 1) linear time-invariant (LTI) systems are investigated. Both OMS systems and systems with n measurable states (NMS systems) are considered.

2. Identifiability and identification of system matrix

Consider an LTI system given by

(1)  dx/dt = Ax,  x(0) = x0

where x is a vector of state variables, x(0) is an initial condition vector and A is a full-rank system matrix with n distinct eigenvalues λ1, λ2, …, λn.

If D is the diagonal eigenvalue matrix of A, P is an eigenvector matrix of A and Q is the inverse of P, then

(2)  A = PDQ

It is clear from Equation (2) that if D and P (or Q) are known, A is determined. As D can be estimated from initial condition responses by methods such as the eigensystem realisation algorithm (ERA) Citation19, estimation of A boils down to estimating the eigenvector matrix P (or Q). The initial condition response of the system (Equation (1)) is

(3)  x(t) = exp(At) x(0) = P exp(Dt) Q x(0)

Equation (3) defines a relationship between the initial condition responses and the eigenvectors, which is useful for eigenvector estimation.
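The decomposition in Equation (2) can be checked numerically. The sketch below uses a hypothetical 2nd-order system (not from the paper): once D and an eigenvector matrix P (with Q = P⁻¹) are known, A follows as A = PDQ.

```python
import numpy as np

# Hypothetical 2nd-order system with eigenvalues -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

lam, P = np.linalg.eig(A)   # D would come from Step-1-style eigenvalue estimation
D = np.diag(lam)
Q = np.linalg.inv(P)

A_rec = (P @ D @ Q).real    # Equation (2): A = P D Q
assert np.allclose(A_rec, A)
```
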

2.1. Identifiability and identification of OMS systems

Assume that the ith state variable, xi, is measurable. From Equation (3),

(4)  xi(t) = c1 exp(λ1 t) + c2 exp(λ2 t) + ⋯ + cn exp(λn t)

where cj = pij (qj x(0)) (j = 1, 2, …, n) is the coefficient for component exp(λj t), pij is the ith element of the jth column pj of P and qj is the jth row of Q. Let

(5)  V = diag(pi1, pi2, …, pin) Q

If pij ≠ 0 for all j (j = 1, 2, …, n), the inverse of matrix V is

(6)  U = V^(-1) = P diag(1/pi1, 1/pi2, …, 1/pin)

If pij = 0 for any j (j = 1, 2, …, n), state xi would not include mode exp(λj t) or have a projection in the direction of the corresponding eigenvector; and thus, measurements of xi alone would not be sufficient to determine all the eigenvectors or A.

It is important to note that U (or V) is another eigenvector matrix of A since each column vector of U (or row vector of V) is just a multiple of the corresponding column vector of P (or row vector of Q). Therefore, A can be reconstructed as A = UDV, if U or V can be estimated.

When the system eigenvalues are available, the coefficient vector c = (c1, c2, …, cn)T has a linear relationship with measurement xi (Equation (4)) and can thus be estimated from the initial condition response of xi. Also, according to Equations (4) and (5),

(7)  c = V x(0)

Equation (7) shows that with only one initial condition response (thus one initial condition vector x(0) and one estimated coefficient vector c), eigenvector matrix V and thus system matrix A cannot be uniquely determined. This is obvious and can be easily illustrated: two different second-order systems can produce exactly the same x2 response (8) to the initial condition (80, 300)T. In other words, the measurement of x2 for one initial condition is insufficient to determine A.
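A hypothetical numerical illustration of this non-uniqueness (the specific systems of Equation (8) are not reproduced here). Assuming Equation (7) has the form c = V x(0), any invertible V whose rows satisfy that constraint, with V⁻¹ having a row of ones for the measured state, yields a system A = V⁻¹DV with the same measured response; the numbers below are my own, not the paper's.

```python
import numpy as np

D = np.diag([-1.0, -2.0])
x0 = np.array([1.0, 2.0])

# Two members of the one-parameter family V = [[a, b], [-a, 1 - b]] with
# a + 2b = 3; both give x2(t) = 3 exp(-t) - exp(-2t) from x0 = (1, 2).
V1 = np.array([[1.0, 1.0], [-1.0, 0.0]])
V2 = np.array([[3.0, 0.0], [-3.0, 1.0]])
A1 = np.linalg.inv(V1) @ D @ V1
A2 = np.linalg.inv(V2) @ D @ V2

def x2_response(A, x0, t):
    """x2(t) of the free response x(t) = P exp(Dt) Q x0."""
    lam, P = np.linalg.eig(A)
    Q = np.linalg.inv(P)
    return (P @ (np.exp(np.outer(lam, t)) * (Q @ x0)[:, None]))[1].real

t = np.linspace(0.0, 3.0, 25)
assert not np.allclose(A1, A2)                 # distinct system matrices...
assert np.allclose(x2_response(A1, x0, t),
                   x2_response(A2, x0, t))     # ...identical measured x2
```
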

If the experiment is repeated n times with a linearly independent initial condition vector each time, eigenvector matrix V becomes solvable as

(9)  V = [c^(1), c^(2), …, c^(n)] [x^(1)(0), x^(2)(0), …, x^(n)(0)]^(-1)

where x^(j)(0) is the initial condition vector for the jth experiment and c^(j) is the coefficient vector of xi for the jth experiment. With the knowledge of D, determination of V leads to the determination of A as discussed earlier.
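A sketch of this n-experiment OMS route, under the assumed form V = C X0⁻¹ for Equation (9), where the columns of X0 are the initial condition vectors and the columns of C are the fitted coefficient vectors. The system, initial conditions and measured-state index below are hypothetical; the coefficient vectors are formed analytically rather than fitted from data.

```python
import numpy as np

lam_true = np.array([-1.0, -2.0, -3.0])
P_true = np.array([[1.0, 1.0, 1.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 0.0, 1.0]])
A = P_true @ np.diag(lam_true) @ np.linalg.inv(P_true)
Q_true = np.linalg.inv(P_true)
i = 0                                   # only x_1 is measured (row i of P nonzero)

X0 = np.array([[1.0, 2.0, 0.0],         # three linearly independent
               [0.0, 1.0, 1.0],         # initial condition vectors (columns)
               [1.0, 0.0, 1.0]])
# In practice each c^(j) is fitted from the measured response x_i(t);
# here it is computed analytically: c_k^(j) = p_ik * (q_k . x0^(j)).
C = P_true[i, :, None] * (Q_true @ X0)

V = C @ np.linalg.inv(X0)               # assumed Equation (9)
D = np.diag(lam_true)                   # eigenvalues assumed known from Step 1
A_est = np.linalg.inv(V) @ D @ V        # A = U D V with U = V^(-1)
assert np.allclose(A_est, A)
```
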

2.2. Identifiability and identification of NMS systems

If all the state variables are measurable, system matrix A can be estimated by the method of Tadi and Cai Citation16 or that of Wagner Citation18. The usefulness of these methods indicates that the measured responses of n states to one initial condition vector can be sufficient to determine system matrix A. This observed identifiability of NMS systems can be easily verified by the procedures used in this work, which yields an alternative, non-iterative method for reconstructing system matrix A. According to Equation (3),

(10)  x(t) = U [exp(λ1 t), exp(λ2 t), …, exp(λn t)]T,  U = P diag(Q x(0))

In Equation (10), the elements of U are ujk = pjk (qk x(0)), and the modal amplitudes qk x(0) (k = 1, 2, …, n) are non-zero constants; otherwise, mode exp(λk t) would not be included and the system order (or the rank of A) should be reduced. Because each column vector of U is a constant multiple of the corresponding column vector of P in Equation (2), U is also an eigenvector matrix of A. Let V = U^(-1); then A can be reconstructed as A = UDV. This establishes that one initial condition response is in theory sufficient to determine the system matrix of a full-rank NMS system. Equation (10) can be used to estimate the eigenvector matrix and consequently system matrix A from measurements.
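The NMS route can be sketched as follows, under the assumed form of Equation (10): with all n states sampled and the eigenvalues known, each row of U collects the least-squares-fitted coefficients of exp(λk t) in one state's response, and then A = UDU⁻¹. The system, sample times and initial condition are hypothetical.

```python
import numpy as np

lam = np.array([-1.0, -2.0, -3.0])           # assumed known from Step 1
P_true = np.array([[1.0, 1.0, 1.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 0.0, 1.0]])
A = P_true @ np.diag(lam) @ np.linalg.inv(P_true)
x0 = np.array([2.0, -1.0, 1.0])              # chosen so all modal amplitudes != 0

t = np.linspace(0.0, 2.0, 50)
E = np.exp(np.outer(t, lam))                 # design matrix of exp(lam_k t)
# Noise-free samples of the initial condition response, Equation (3):
# x(t) = P exp(Dt) Q x0; column j of X holds the samples of x_j(t).
X = E @ (np.diag(np.linalg.inv(P_true) @ x0) @ P_true.T)

U = np.linalg.lstsq(E, X, rcond=None)[0].T   # least-squares fit per state
A_est = U @ np.diag(lam) @ np.linalg.inv(U)  # A = U D V with V = U^(-1)
assert np.allclose(A_est, A)
```
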

3. Implementation algorithm

The results presented in the previous section can be implemented as an algorithm for system matrix estimation from initial condition responses with the following major steps.

Step 1:

Estimating system eigenvalues from initial condition responses.

For a real application, the Hankel matrix can be constructed from the initial condition responses, and the eigensystem realisation algorithm Citation19 can be used to compute the system eigenvalues. The discrete-time state space equations can be represented as

(11)  x(k + 1) = Ad x(k)

where k is the time index of sampled data, x is the state vector and Ad is the system matrix in discrete-time form (Ad = exp(AΔt) for sampling interval Δt). For an NMS system, the Hankel matrix can be directly constructed as follows Citation19:

(12)  H(k) = [x(k) x(k+1) … x(k+r−1); x(k+1) x(k+2) … x(k+r); … ; x(k+s−1) x(k+s) … x(k+r+s−2)]

where s and r are the numbers (> n) of row blocks and column blocks of the Hankel matrix, respectively.

For an OMS system, the multiple initial condition responses can be re-arranged into the following system with only one combined initial condition vector:

(13)  X(k + 1) = diag(Ad, Ad, …, Ad) X(k),  X(k) = [x^(1)(k)T, x^(2)(k)T, …, x^(n)(k)T]T

where x^(j)(k) is the system state vector at the kth sampling point of the jth experiment. Obviously, the eigenvalues of Ad are included in the eigenvalues of this newly-formed system in Equation (13), except for different multiplicities. According to Juang and Pappa Citation19, the data forming the OMS system Hankel matrix can be the vector [xi^(1)(k), xi^(2)(k), …, xi^(n)(k)]T or its transpose, where xi^(j)(k) is the kth data point of the ith measured state variable from the jth initial condition–response experiment.

By singular value decomposition (SVD), H(0) = UH Σ VHT, where UH is a left singular vector matrix, VH is a right singular vector matrix and Σ is a diagonal singular value matrix of H(0). One realisation for the discrete-time system matrix in Equation (11) or (13) is

(14)  Ad = Σ^(-1/2) UHT H(1) VH Σ^(-1/2)

Detailed explanations about the Hankel matrix and the ERA can be found in various references Citation19–22. If λd is an eigenvalue of Ad, the corresponding eigenvalue of the continuous-time system matrix A is

(15)  λ = ln(λd)/Δt

where Δt is the sampling interval.
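Step 1 can be sketched end-to-end as follows, under the assumed forms of Equations (12), (14) and (15): build shifted Hankel matrices from noise-free samples of a hypothetical free response, realise Ad from the SVD of H(0), and map the discrete eigenvalues back with λ = ln(λd)/Δt. All numbers are illustrative assumptions.

```python
import numpy as np

dt = 0.05
lam_true = np.array([-1.0, -2.0, -3.0])
t = dt * np.arange(60)
# Free response of a hypothetical 3rd-order NMS system, all states sampled:
# column k of X is x(t_k) = sum_j p_j * m_j * exp(lam_j t_k).
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
m = np.array([3.0, 1.0, -2.0])               # modal amplitudes, assumed nonzero
X = (P * m) @ np.exp(np.outer(lam_true, t))  # n x N data matrix

s = 10                                        # number of block rows (> n)
H0 = np.vstack([X[:, k:k + 30] for k in range(s)])       # H(0)
H1 = np.vstack([X[:, k + 1:k + 31] for k in range(s)])   # H(1), shifted by one
UH, sv, VHt = np.linalg.svd(H0, full_matrices=False)
r = 3                                         # model order (rank of H(0))
S = np.diag(sv[:r] ** -0.5)
Ad = S @ UH[:, :r].T @ H1 @ VHt[:r].T @ S     # realised discrete-time matrix

lam_est = np.sort(np.log(np.linalg.eigvals(Ad)).real / dt)
assert np.allclose(lam_est, np.sort(lam_true), atol=1e-6)
```
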

Step 2:

Computing eigenvector matrix.

For an OMS system, the jth (j = 1, 2, …, n) initial condition response gives

(16)  [exp(λ1 t0) … exp(λn t0); exp(λ1 t1) … exp(λn t1); … ; exp(λ1 tN) … exp(λn tN)] c^(j) = [xi^(j)(t0), xi^(j)(t1), …, xi^(j)(tN)]T

where t0, t1, …, tN are the sampling instants, so that N + 1 is the number of sampled data points. Equation (16) can be solved for the coefficient vector c^(j). With n coefficient vectors solved from n initial condition responses, an eigenvector matrix can be determined by Equation (9). N = n − 1 is theoretically sufficient for solving Equation (16), but in practice, a larger N is preferred: N > n − 1 leads to an over-determined system and a least-squares solution of Equation (16), which helps reduce the influence of noise in the measurements.
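The least-squares solution of Equation (16) amounts to fitting a sum of known exponentials to the sampled response. A minimal sketch with hypothetical eigenvalues, coefficients and (noise-free) samples:

```python
import numpy as np

lam = np.array([-1.0, -2.0, -3.0])       # eigenvalues assumed known from Step 1
c_true = np.array([0.5, -1.5, 2.0])      # hypothetical true coefficient vector
t = 0.05 * np.arange(40)                 # N + 1 = 40 > n samples (over-determined)

E = np.exp(np.outer(t, lam))             # (N + 1) x n design matrix of exp(lam_k t)
xi = E @ c_true                          # noise-free samples of x_i(t);
                                         # real measurements would add noise here
c_est = np.linalg.lstsq(E, xi, rcond=None)[0]
assert np.allclose(c_est, c_true)
```
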

For an NMS system, the elements of eigenvector matrix U follow from the same fit applied to each measured state:

(17)  [exp(λ1 t0) … exp(λn t0); … ; exp(λ1 tN) … exp(λn tN)] [uj1, uj2, …, ujn]T = [xj(t0), xj(t1), …, xj(tN)]T

where j (j = 1, 2, …, n) is the index of state variables.

Step 3:

Constructing the system matrix.

Compute the inverse of the eigenvector matrix determined in Step 2, V = U^(-1) (or U = V^(-1)). The system matrix A can then be uniquely reconstructed as

(18)  A = UDV

where D is the diagonal eigenvalue matrix of A from Step 1.

4. Illustrative examples

4.1. System with OMS

Assume that the system matrix for a third-order system is Also assume that the third state variable is measurable and three initial condition vectors are given by where each row vector is the initial condition for one experiment. The Runge–Kutta algorithm was used to generate the initial condition responses.

Step 1:

By using the ERA, the system eigenvalues computed from the initial condition responses are

Step 2:

The coefficient vectors are computed from Equation (16) as According to Equation (9), eigenvector matrix V is found as

Step 3:

Finally the system matrix is Obviously, the exact A matrix is recovered from the initial condition responses.

4.2. System with n measurable states (NMS)

Use the same system as the one used in the OMS example above and assume that the initial condition vector is Again, the Runge–Kutta algorithm was used to generate the initial condition response.

Step 1:

By using the ERA, the estimated system eigenvalues are

Step 2:

By Equation (17), the estimated eigenvector matrix U is

Step 3:

The estimated system matrix is which again is precisely the true A matrix.

5. Discussion

As analysed in Section 2, it is impossible to identify the n^2 parameters of an OMS system using data from only one initial condition response. With the additional constraints introduced by n independent initial condition responses, however, an OMS system becomes identifiable. Different from OMS systems, NMS systems are identifiable with only one initial condition response. In fact, initial condition responses of LTI systems are sums of exponential functions: the responses span an n-dimensional space and involve n different eigenvalues. Each measured response has n coefficients of exp(λj t), of which only n − 1 are independent because their sum equals the known initial condition of the measured state. The number of independent variables that define such a set of exponential responses is therefore k = n + (n − 1) × p, where n is the order of the system and p is the number of responses. When p = n, k = n^2. From this viewpoint, the identifiability of OMS systems with n initial condition responses is equivalent to that of NMS systems with one initial condition response.
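The parameter count above can be verified mechanically: n eigenvalues plus (n − 1) independent coefficients per response give exactly n^2 quantities when p = n.

```python
# Check k = n + (n - 1) * p against the n^2 entries of A, for p = n responses.
for n in range(2, 10):
    p = n
    k = n + (n - 1) * p
    assert k == n ** 2
```
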

For some systems or processes, one or more state variables may be continually measured at short sampling intervals while the other state variables may only be measured much less frequently. Under these conditions, there is no need to repeat the experiment n times, since response data starting from each time point at which all state variables are measured form a new initial condition response.

The algorithm put forth in Section 3 is non-iterative. As a result, the computational speed is high and the algorithm does not suffer from local-minimum issues. With noise, the ERA may give some additional eigenvalues, which are usually very small or have unreasonable damping coefficients. Stabilisation diagrams can usually be used to differentiate them from the real system eigenvalues Citation23,Citation24. Other techniques can also be used to estimate system eigenvalues Citation25–29. Furthermore, the derivations are based on the assumption of distinct eigenvalues. If two estimated eigenvalues are close but resolvable by the measured data and the estimation algorithm, they are unlikely to cause problems for the solution of Equations (16) and (17). Nonetheless, this needs to be further analysed. Overall, the numerical stability of the algorithm needs further investigation.

6. Conclusion

The identifiability of the system matrix from initial condition responses is analysed for systems with OMS or NMS. One initial condition response is insufficient to determine the n^2 (n > 1) unknown parameters of an OMS system; OMS systems require data from n independent initial condition responses to determine the system matrix, while NMS systems need only one response. With the help of the eigensystem realisation algorithm, the formulations developed in this research can be used to determine the system matrices from initial condition responses for both OMS and NMS systems. The numerical stability of the algorithm under noisy conditions needs further investigation.

References

  • Steiner, G, and Bernhard, S, 2006. Parameter identification for a complex lead-acid battery model by combining fuzzy control and stochastic optimization, Inverse Prob. Sci. Eng. 14 (2006), pp. 665–685.
  • Shatalov, YS, Lukashuk, SY, and Rikachev, YY, 1999. The problem of coefficients identification in the mathematical model of the ion implantation diffusion process, Inverse Prob. Sci. Eng. 7 (1999), pp. 267–290.
  • Davison, EJ, 1966. A method for simplifying linear dynamic systems, IEEE Trans. Automat. Control 11 (1966), pp. 93–101.
  • Juang, JN, 1997. State-space System Realization with Input and Output Data Correlation. Hampton, VA: National Aeronautics and Space Administration; 1997.
  • Walter, E, and Pronzato, L, 1997. Identification of Parametric Models. New York: Springer; 1997.
  • Nagumo, JI, and Noda, A, 1967. A learning method for system identification, IEEE Trans. Automat. Control 12 (1967), pp. 282–287.
  • Kalman, RE, 1960. A new approach to linear filtering and prediction problems, Trans. ASME J. Basic Eng. 82 (1960), pp. 35–45.
  • Ljung, L, 1979. Asymptotic behavior of the extended Kalman filter as a parameter estimator for linear systems, IEEE Trans. Autom. Contr. 24 (1979), pp. 36–50.
  • Glonek, GFV, 1999. On identifiability in models for incomplete binary data, Stat. Prob. Lett. 41 (1999), pp. 191–197.
  • Grewal, M, and Glover, K, 1976. Identifiability of linear and nonlinear dynamical systems, IEEE Trans. Autom. Control 21 (1976), pp. 833–837.
  • Harrison, KJ, Partington, JR, and Ward, JA, 2002. Input-output identifiability of continuous-time linear systems, J. Complexity 18 (2002), pp. 210–223.
  • Orlov, Y, Belkoura, L, Richard, JP, and Dambrine, M, 2002. On identifiability of linear time-delay systems, IEEE Trans. Autom. Contr. 47 (2002), pp. 1319–1324.
  • Vidal, R, Chiuso, A, and Soatto, S, 2002. Observability and identifiability of jump linear systems. Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV; 2002. pp. 3614–3619.
  • Belkoura, L, 2005. Identifiability of systems described by convolution equations, Automatica 41 (2005), pp. 505–512.
  • Mehra, RK, 1974. Optimal inputs for linear system identification, IEEE Trans. Autom. Control 19 (1974), pp. 192–200.
  • Tadi, M, and Cai, W, 2001. Inverse matrix evaluation for linear systems, Inverse Prob. 17 (2001), pp. 247–260.
  • Guo, Y, and Tan, J, 2009. A kinetic model structure for delayed fluorescence from plants, Biosystems 95 (2009), pp. 98–103.
  • Wagner, N, 2002. "Use of matrix logarithms in system identification". In: Mang, HA, Rammerstorfer, FG, and Eberhardsteiner, J, eds. Fifth World Congress on Computational Mechanics. Vienna, Austria; 2002. pp. 1–10.
  • Juang, JN, and Pappa, RS, 1985. An eigensystem realization algorithm for modal parameter identification and model reduction, J. Guid. Control Dyn. 8 (1985), pp. 620–627.
  • Bazan, FSV, 2004. Eigensystem realization algorithm (ERA): Reformulation and system pole perturbation analysis, J. Sound Vib. 274 (2004), pp. 433–444.
  • Caicedo, JM, Marulanda, J, Thomson, P, and Dyke, S, 2001. Monitoring of bridges to detect changes in structural health. Proceedings of the American Control Conference, Arlington, VA; 2001.
  • Dyke, SJ, Caicedo, JM, and Johnson, EA, 2000. Monitoring of a benchmark structure for damage identification. Proceedings of the Engineering Mechanics Specialty Conference, Austin, TX; 2000.
  • Iwaniec, J, 2006. Novel approach to modal model reduction by means of the balanced realization method part 2-applications, Mol. Quantum Acoust. 27 (2006), pp. 119–132.
  • Peeters, B, and Roeck, GD, 2001. Stochastic system identification for operational modal analysis: A review, J. Dyn. Syst. Meas. Control 123 (2001), pp. 659–667.
  • Chen, H, Van Huffel, S, and Vandewalle, J, 1997. Improved methods for exponential parameter estimation in the presence of known poles and noise, IEEE Trans. Signal Proc. 45 (1997), pp. 1390–1393.
  • Holmstrom, K, and Petersson, J, 2002. A review of the parameter estimation problem of fitting positive exponential sums to empirical data, Appl. Math. Comput. 126 (2002), pp. 31–61.
  • Morren, G, Lemmerling, P, and Van Huffel, S, 2003. Decimative subspace-based parameter estimation techniques, Signal Process. 83 (2003), pp. 1025–1033.
  • Papy, JM, De Lathauwer, L, and Van Huffel, S, 2005. Exponential data fitting using multilinear algebra: The single-channel and multi-channel case, Numer. Linear Algebra Appl., 12 (2005), pp. 809–826.
  • Papy, JM, De Lathauwer, L, and Van Huffel, S, 2006. Common pole estimation in multi-channel exponential data modeling, Signal Process. 86 (2006), pp. 846–858.
