
Statistical accuracy analysis for virtual reference feedback tuning with two degrees of freedom controllers*

Pages 462-476 | Received 20 Mar 2020, Accepted 07 Jul 2020, Published online: 23 Jul 2020

Abstract

Virtual reference feedback tuning is a data-driven control method in which only input–output data are used to directly design a two-degrees-of-freedom controller, i.e. a feed-forward controller and a feedback controller. Given two desired closed-loop transfer functions, a virtual input and a virtual disturbance are constructed, respectively, to obtain one cost function whose decision variables are the two unknown controller parameter vectors. After the cost function is formulated in linear regression form, the classical least squares algorithm is applied to identify the unknown parameters. To quantify the quality, or approximation error, of the parameter estimators, a statistical accuracy analysis is carried out for virtual reference feedback tuning control. The detailed computation of the asymptotic variance matrix is given using results from probability theory and matrix theory. Based on the obtained asymptotic variance matrix, two applications, optimal input design and model structure validation, illustrate how the matrix can be used. Finally, one simulation example demonstrates the effectiveness of the theories proposed in this paper.

1. Introduction

For many industrial production processes, safety and production restrictions are often strong reasons for not allowing controller design on the open-loop system. In such situations, controllers must be designed under closed-loop conditions. When designing controllers in a closed-loop system, only experimental data about the whole closed-loop system can be collected. The main difficulty of designing a controller in a closed loop is that the researcher must consider the correlation between the reference input and the external disturbance induced by the feedback loop. Moreover, to design controllers for a closed-loop system, a mathematical model of the considered plant must normally be known a priori. However, the mathematical description or model of the considered plant cannot be determined easily in industrial processes, such as paper mills and other chemical plants.

Virtual reference feedback tuning control is a data-driven control method in which the model identification step can be skipped, simplifying the whole design process. In engineering, the classical PID controller is often used to achieve the desired goal. The PID controller is a parametrized controller, and this special parametrized structure is the one considered in this paper. If the expected controller is parametrized by an unknown parameter vector, the main merit of virtual reference feedback tuning control is that the unknown parameter vector is tuned using only measured input–output data, without any knowledge of the plant model. In the virtual reference feedback tuning framework, the process of designing the controller is thus transformed into tuning the unknown parameter vector.

The cost of developing a mathematical model is very high. To avoid the identification of the plant, a direct data-driven method, virtual reference feedback tuning control, was proposed to design the controller (Guardabassi & Savaresi, Citation2012). The essence of virtual reference feedback tuning control is that the controller can be designed directly from measured data, without any prior knowledge of the plant model. In Bravo et al. (Citation2006), the problem of designing the controller was transformed into identifying one unknown parameter vector of the controller. The idea of virtual reference feedback tuning control was then applied to a benchmark problem (Bravo et al., Citation2016), where a control cost of 2-norm type was minimized using a batch of data collected from the plant. Virtual reference feedback tuning for controller tuning in a nonlinear setup was introduced in Bravo et al. (Citation2017). In Tanaskovic et al. (Citation2017), the extension of virtual reference feedback tuning control to the design of two degrees of freedom controllers was presented to shape the input–output transfer function of the closed-loop system. As the direct data-driven control approach consists of iteratively adjusting the controller parameters towards their optimal values, the H2 performance criterion is analysed in order to characterize and enlarge the set of initial parameter values (Campi et al., Citation2002). The contribution of Campi et al. (Citation2005) introduces an invalidation test step, based on the available data, to check whether the flexibility of the controller parametrization is suitable for the design objectives. In recent years, many papers on virtual reference feedback tuning control have been published. For example, in Jianhong (Citation2012) the model-matching problems for the closed-loop transfer function and the sensitivity function were considered simultaneously, and the iterative sequence generated by the iterative least squares identification algorithm was analysed for identifying the unknown parameter vector. Virtual reference feedback tuning control for a constrained closed-loop system was studied in Jianhong (Citation2014b), where an ellipsoid optimization iterative algorithm was adopted to generate a sequence of ellipsoids with decreasing volume. To bridge the gap between linear and nonlinear controllers, the problem of designing a linear time-varying controller was solved by virtual reference feedback tuning control (Jianhong, Citation2014a). By minimizing one approximation error, the adjustable weights in the linear affine function are given in Jianhong (Citation2013). In Jianhong (Citation2017), virtual reference feedback tuning control is combined with the internal model control strategy so that the plant identification step is avoided in internal model control. In this paper, we do not consider how to design the controller by virtual reference feedback tuning, but instead study the finite sample properties of virtual reference feedback tuning with two degrees of freedom controllers, based on a quadratic criterion applied to a simple closed-loop system structure. It is well known that finite sample properties quantitatively assess the discrepancy between minimizing a theoretical identification cost and its empirical counterpart when only a finite number of data points is available (Bazanella et al., Citation2008).
The model quality with many data points is assessed by applying asymptotic theory under linear model identification conditions (Sala & Esparza, Citation2005), but these asymptotic results hold only when the number of data points tends to infinity (Campi & Savaresi, Citation2006). Owing to the absence of an identification step, virtual reference feedback tuning control is now widely applied in engineering and theoretical research.

Here in this paper, based on the above-mentioned references and our previous contributions (Jianhong, Citation2012, Citation2013, Citation2014a, Citation2014b, Citation2017, Citation2019), we continue the research on this direct data-driven control strategy, virtual reference feedback tuning control. In the commonly used closed-loop system, a feed-forward controller and a feedback controller exist simultaneously, and these two controllers correspond to a closed-loop system with two degrees of freedom controllers. There are many references on designing these two unknown controllers; for the sake of completeness, classical model reference control and virtual reference feedback tuning control are reviewed here. Comparing the principles of these two strategies, no prior knowledge about the considered plant is needed in virtual reference feedback tuning control. Furthermore, after constructing the virtual input and the virtual disturbance, the problem of designing the unknown controllers can be changed into one identification problem whose decision variables are the two unknown parameter vectors. For the identification of these two unknown parameter vectors, many identification algorithms can be applied directly from the literature, for example the least squares method, the separable gradient method, and the parallel distributed method. After these two unknown parameter vectors are identified, we need to test whether the obtained parameter estimators are biased or asymptotically efficient. The whole process of achieving this goal is named statistical accuracy analysis for virtual reference feedback tuning control, i.e. probability theory and matrix theory are applied to construct the asymptotic variance matrix for those two unknown parameter vectors. The two diagonal sub-matrices of the asymptotic variance matrix are the asymptotic variance expressions for the unknown parameter vectors. Here we give a detailed derivation of this complex asymptotic variance matrix through our own mathematical derivations. Based on the obtained asymptotic variance matrix, optimal input design and model structure validation for the closed-loop system can be studied, since both problems depend on that matrix. Generally, to quantify the quality of the parameter estimators, statistical accuracy analysis is studied for virtual reference feedback tuning with two degrees of freedom controllers, and the computation of the asymptotic variance matrix is given in detail. This asymptotic variance matrix describes the approximation error between the true value and its estimator. From the point of view of further research on system identification and data-driven control, the obtained asymptotic variance matrix is also beneficial for other subjects, such as optimal input design and model structure validation. To pave the road for these subjects, some tedious mathematical derivations are given here to obtain that important asymptotic variance matrix. The main contributions of this paper are to derive the asymptotic variance matrix corresponding to the two unknown controller parameter vectors, and to apply this matrix to optimal input design and model structure validation.
To the best of our knowledge, this statistical accuracy analysis for virtual reference feedback tuning control is the first such work in control research, owing to its strong mathematical basis.

Contributions: the main contributions of this paper are formulated as follows.

  1. Derive the asymptotic variance matrix, corresponding to those two unknown controller parameters.

  2. Apply this asymptotic variance matrix to optimal input design and model structure validation.

  3. A virtual input and a virtual disturbance are constructed, respectively, to obtain one cost function, which corresponds to two model-matching problems.

  4. The classical least squares algorithm is applied to identify the unknown controller parameters after transforming that cost function into its simplified linear regression form.

The paper is organized as follows. In Section 2, the problem formulation is addressed and the structure of the considered closed-loop system with two degrees of freedom controllers is presented. To compare virtual reference feedback tuning control with classical model reference control, a short introduction to classical model reference control for designing the two unknown parametrized controllers is given in Section 3, where its deficiency is also pointed out. In Section 4, virtual reference feedback tuning control is proposed to design these two parametrized controllers; furthermore, a virtual input and a virtual disturbance are constructed to obtain an optimization problem. The simple least squares method and its recursive form are proposed to identify the two unknown parameter vectors in Section 5, together with the detailed process of reducing that optimization problem to its simplified form. The main contribution, statistical accuracy analysis for virtual reference feedback tuning control, is studied in Section 6, where results from probability theory and matrix theory are combined to obtain the asymptotic variance matrix. Based on this matrix, its applications to optimal input design and model structure validation are described in Section 7. In Section 8, one simulation example illustrates the effectiveness of the proposed theories. Section 9 ends the paper with final conclusions and points out our future work.

2. Problem description

Assume the plant is a linear time invariant, discrete time, single input single output process. The plant is denoted by a rational transfer function P(z), and P(z) is unknown. Throughout the closed-loop experiment, only a sequence of input–output measured data corresponding to the plant P(z) is collected. The input–output relation is described as follows:
(1) y(t) = P(z)u(t) + d(t),
where z is the time shift operator, i.e. z^{-1}u(t) = u(t-1), P(z) is the transfer function of the plant, u(t) is the measured input, y(t) is the measured output of the plant P(z), and d(t) is the external noise. When d(t) in Equation (1) is unknown but has known bounds, we regard the uncertainty associated with d(t) as additive noise because of the way it enters the input–output relation (1).
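The data-generating relation (1) can be mimicked numerically; the following minimal Python sketch filters a test input through a placeholder plant and adds noise. The plant coefficients, input and noise level are illustrative assumptions, not the system used later in the paper.

```python
# Minimal sketch of the open-loop data-generating relation y(t) = P(z)u(t) + d(t).
# The plant coefficients below are placeholders, not the plant of the paper.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 1000
u = np.sin(0.05 * np.arange(N))          # excitation u(t)
d = 0.01 * rng.standard_normal(N)        # external noise d(t)

# P(z) as a rational function: num/den are polynomial coefficient arrays.
num_P = [0.0, 0.5, -0.2]                 # hypothetical numerator
den_P = [1.0, -1.1, 0.3]                 # hypothetical denominator

y = lfilter(num_P, den_P, u) + d         # measured output y(t)
```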

Consider the simple closed-loop system with two degrees of freedom controllers in Figure 1. The input–output relations in the whole closed-loop system are written as follows:
(2) y(t) = P(z)u(t) + d(t), u(t) = C1(z,θ)ε(t) = C1(z,θ)[r(t) − C2(z,η)y(t)], ε(t) = r(t) − C2(z,η)y(t),
where r(t) is the excitation signal, M(z) is the expected closed-loop transfer function, and C1(z,θ) and C2(z,η) are the two degrees of freedom controllers, parametrized by the unknown parameter vectors θ and η, respectively. These two controllers C1(z,θ) and C2(z,η) are independently parametrized in the following linear affine forms:
(3) C1(z,θ) = α^T(z)θ, C2(z,η) = β^T(z)η, α(z) = [α1(z), α2(z), …, αn(z)]^T, β(z) = [β1(z), β2(z), …, βn(z)]^T, θ = [θ1, θ2, …, θn]^T, η = [η1, η2, …, ηn]^T,
where α(z) and β(z) denote two known basis function vectors, and θ and η are two unknown parameter vectors of dimension n. One main contribution of this paper is to construct the variance matrix for these two unknown parameter vectors θ and η, using probability theory and matrix theory, in the case of probabilistic noise d(t).

Figure 1. The closed-loop system with two degrees of freedom controllers.


Observing Figure 1, in a classical control system C2(z,η) is usually a sensor that feeds the output signal back to the input. But in some practical control systems, such as flight control and vehicle control, C2(z,η) may be a feedback controller.

Figure 2. Construction of virtual input.


Rearranging Equation (2), the new input–output relation in terms of the excitation input r(t) and the external noise d(t) is
(4) y(t) = [P(z)C1(z,θ)/(1 + P(z)C1(z,θ)C2(z,η))] r(t) + [1/(1 + P(z)C1(z,θ)C2(z,η))] d(t).
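Equation (4) is just rational-function algebra on P(z), C1(z,θ) and C2(z,η); the sketch below forms the closed-loop numerator and denominator by polynomial multiplication and addition. The coefficient arrays for P, C1 and C2 are hypothetical placeholders.

```python
# Sketch: forming the closed-loop transfer function of Equation (4),
#   T(z) = P*C1 / (1 + P*C1*C2),
# by polynomial arithmetic on (num, den) coefficient arrays.
import numpy as np

def tf_mul(n1, d1, n2, d2):
    """Multiply two rational transfer functions given as (num, den) arrays."""
    return np.polymul(n1, n2), np.polymul(d1, d2)

def tf_feedback(nf, df, nb, db):
    """Closed loop (nf/df) / (1 + (nf/df)*(nb/db)) for forward and feedback paths."""
    num = np.polymul(nf, db)
    den = np.polyadd(np.polymul(df, db), np.polymul(nf, nb))
    return num, den

# Hypothetical P, C1, C2 as (num, den) pairs in descending powers of z.
P  = ([0.5, -0.2], [1.0, -1.1, 0.3])
C1 = ([0.8, 0.4, 0.3], [1.0, -1.0, 0.0])
C2 = ([0.3, 0.2], [1.0, 0.0])

nPC1, dPC1 = tf_mul(*P, *C1)                  # forward path P*C1
num_T, den_T = tf_feedback(nPC1, dPC1, *C2)   # transfer function from r(t) to y(t)
```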

3. Classical model reference control

In the closed-loop system with two degrees of freedom controllers, the first transfer function, from the excitation signal r(t) to the measured output y(t), is the closed-loop transfer function (Lecchini et al., Citation2002), and the second transfer function, from the external signal d(t) to the measured output y(t), is the sensitivity function (Milanese & Novara, Citation2004). From Equation (4), the control task of classical model reference control is to tune the unknown parameter vectors θ and η of the two degrees of freedom controllers (C1(z,θ), C2(z,η)) so as to achieve the expected closed-loop transfer function and the expected sensitivity function. Given the expected closed-loop transfer function M(z) and the expected sensitivity function S(z), we want the closed-loop transfer function to approximate M(z) and the sensitivity function to approach S(z) (Alamo et al., Citation2005). The problem of tuning these two unknown parameter vectors θ and η is formulated as the following classical model reference control optimization problem:
(5) min_{θ,η} J_MR(θ,η) = ‖P(z)C1(z,θ)/(1 + P(z)C1(z,θ)C2(z,η)) − M(z)‖_2^2 + ‖1/(1 + P(z)C1(z,θ)C2(z,η)) − S(z)‖_2^2,
where ‖·‖_2 is the 2-norm. Before solving the optimization problem (5) with respect to the unknown parameter vectors θ and η, prior knowledge about the plant P(z) is needed. As the plant P(z) is unknown in this model reference control design method, an identification strategy would have to be used to identify P(z). To avoid identifying the plant P(z), virtual reference feedback tuning control is proposed to identify the unknown parameter vectors θ and η of the two controllers (C1(z,θ), C2(z,η)) directly from the measured input–output data set Z^N = {u(t), y(t)}_{t=1}^{N}, where N is the number of data points.

4. Virtual reference feedback tuning control

Given two controllers {(C1(z,θ), C2(z,η))} such that the closed-loop transfer function from r(t) to y(t) is M(z), if an arbitrary signal r(t) is applied to excite the closed-loop system (2), the output of the closed-loop system is y(t) = M(z)r(t). Considering one special excitation input r̄(t), a necessary condition for the closed-loop transfer function to be M(z) is that the two closed-loop systems have the same output y(t) under this given input (Jianhong, Citation2019). In classical model reference control, this necessary condition holds when a suitable controller and excitation signal are chosen; here, however, the above reasoning cannot be used directly because the plant is unknown. The idea of virtual reference feedback tuning control is that a virtual input r̄(t) and a virtual disturbance d̄(t) are constructed first, so we now give the detailed process of constructing the virtual input r̄(t) and the virtual disturbance d̄(t).

4.1. Virtual input

Collect an input–output sequence {u(t), y(t)}_{t=1}^{N} of the plant P(z); then, for every measured output y(t), define one virtual input r̄(t) such that
(6) y(t) = M(z)r̄(t).
This virtual input r̄(t) does not exist in reality and cannot be used to generate the actual measured output y(t). But the virtual input r̄(t) can be obtained from Equation (6) as r̄(t) = M^{-1}(z)y(t), i.e. y(t) is the measured output of the closed-loop system when the excitation signal r̄(t) is applied with no disturbance, d(t) = 0.

Since the plant P(z) is unknown but produces the output y(t) when excited by u(t), we choose two suitable controllers {(C1(z,θ), C2(z,η))} that reproduce the signal u(t) when the closed-loop system is excited by the virtual input r̄(t) and y(t) simultaneously. The construction of the virtual input r̄(t) is shown in Figure 2, where the tracking error ε(t) is defined as
(7) ε(t) = r̄(t) − C2(z,η)y(t) = (M^{-1}(z) − C2(z,η))y(t).
From Figure 2 we see that, when the closed-loop system is excited by (r̄(t), y(t), d(t) = 0), the expression of u(t) is
(8) u(t) = C1(z,θ)ε(t) = C1(z,θ)(M^{-1}(z) − C2(z,η))y(t).
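A possible numerical realization of the constructions (6)-(8) is to filter the measured output through the inverse of the reference model. The sketch below assumes M(z) is stably invertible; the coefficient arrays and the synthetic output are placeholders.

```python
# Sketch of the virtual-input construction (6)-(8): r_bar = M^{-1}(z) y and
# eps = r_bar - C2(z, eta) y, via inverse filtering.  Assumes M(z) is stably
# invertible; num_M/den_M/num_C2/den_C2 and y are placeholder data.
import numpy as np
from scipy.signal import lfilter

num_M, den_M = [0.2, 0.1], [1.0, -0.7]       # hypothetical reference model M(z)
num_C2, den_C2 = [0.3, 0.2], [1.0, 0.0]      # hypothetical feedback controller C2

y = np.random.default_rng(1).standard_normal(1000)   # stands in for measured y(t)

r_bar = lfilter(den_M, num_M, y)             # virtual reference r_bar(t) = M^{-1}(z) y(t)
eps   = r_bar - lfilter(num_C2, den_C2, y)   # virtual tracking error eps(t)
```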

4.2. Virtual disturbance

Given the measured output y(t), define one virtual disturbance d̄(t) such that, when the closed-loop system is excited by the virtual disturbance d̄(t), the obtained output ȳ(t) is
(9) ȳ(t) = y(t) + d̄(t).
The construction of the virtual disturbance d̄(t) is shown in Figure 3, where the virtual disturbance d̄(t) satisfies
(10) y(t) + d̄(t) = S(z)d̄(t).
Equation (10) is used to generate the virtual disturbance d̄(t); it means that, when the closed-loop system is excited by the signals (r(t) = 0, d̄(t), ȳ(t) = y(t) + d̄(t)), the obtained signal u(t) is
(11) u(t) = −C1(z,θ)C2(z,η)ȳ(t).
As ȳ(t) in Equation (11) is not the true output, the true output being y(t), ȳ(t) in Equation (11) must be expressed in terms of y(t).

Figure 3. Construction of virtual disturbance.


Combining Equations (9) and (10), we get
(12) y(t) = (S(z) − 1)d̄(t) = (S(z) − 1)S^{-1}(z)ȳ(t).
Furthermore,
(13) ȳ(t) = [S(z)/(S(z) − 1)] y(t).
Using Equations (8), (11) and (13), the two unknown parameter vectors θ and η of the two controllers {(C1(z,θ), C2(z,η))} can be identified by solving the following optimization problem:
(14) min_{θ,η} J_VR^N(θ,η) = (1/N) Σ_{t=1}^{N} [u(t) − C1(z,θ)(M^{-1}(z) − C2(z,η))y(t)]^2 + (1/N) Σ_{t=1}^{N} [u(t) + C1(z,θ)C2(z,η)(S(z)/(S(z) − 1))y(t)]^2.
In the optimization problem (14), all variables are known except the two parametrized controllers {(C1(z,θ), C2(z,η))}. More specifically, the input–output data {u(t), y(t)}_{t=1}^{N} can be collected by sensors, and the two expected transfer functions M(z) and S(z) are known a priori. Roughly speaking, the plant P(z) does not appear in the optimization problem (14), which is the main benefit of virtual reference feedback tuning control. Furthermore, the optimization problem (14) shows that the problem of designing two controllers can be transformed into identifying two unknown parameter vectors. This transformation simplifies the subsequent computational effort of designing the controllers.
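For a fixed pair of controllers, the cost (14) can be evaluated directly from the recorded data, as in the following sketch; the transfer functions are passed as (numerator, denominator) coefficient pairs, and invertibility of M(z) and of S(z) − 1 is assumed.

```python
# Sketch of the VRFT cost (14) evaluated from measured data for fixed C1, C2.
import numpy as np
from scipy.signal import lfilter

def vrft_cost(u, y, C1, C2, M, S):
    """J_VR^N for given (num, den) pairs C1, C2, M, S."""
    nC1, dC1 = C1; nC2, dC2 = C2; nM, dM = M; nS, dS = S
    # First residual: u - C1 (M^{-1} - C2) y
    w = lfilter(dM, nM, y) - lfilter(nC2, dC2, y)        # (M^{-1} - C2) y
    res1 = u - lfilter(nC1, dC1, w)
    # Second residual: u + C1 C2 S/(S-1) y, with S/(S-1) = nS / (nS - dS)
    nS1, dS1 = nS, np.polysub(nS, dS)
    v = lfilter(np.polymul(nC2, nS1), np.polymul(dC2, dS1), y)
    res2 = u + lfilter(nC1, dC1, v)
    return np.mean(res1**2) + np.mean(res2**2)
```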

5. Identification algorithm

From the above description of virtual reference feedback tuning, the main step is to identify the two unknown parameter vectors (θ, η). Many identification algorithms proposed in the literature are based on a probability distribution for the external disturbance (Weyer, Citation2000). Generally, to validate the applied model in engineering, some uncertainty is added to the mathematical model; the disturbance affecting the real system frequently has to be taken into account in the mathematical model in order to ensure a similar behaviour of the real system and the model. As the main contribution here concerns statistical accuracy analysis of virtual reference feedback tuning control, not the identification algorithm itself, for the sake of completeness we only use the simple least squares algorithm in this section to identify those two unknown parameter vectors (θ, η). In order to apply the classical least squares algorithm, we first transform the problem of identifying the unknown parameter vectors in virtual reference feedback tuning control into two linear regression models, which are suitable for the least squares algorithm.

Observing the optimization problem (14) in virtual reference feedback tuning control, our goal is to identify two optimal parameter vectors such that both residuals vanish. Setting the two expressions for u(t) equal,
(15) C1(z,θ)(M^{-1}(z) − C2(z,η))y(t) = −C1(z,θ)C2(z,η)(S(z)/(S(z) − 1))y(t).
Then
(16) (M^{-1}(z) − C2(z,η) + C2(z,η)S(z)/(S(z) − 1))y(t) = 0.
It means that
(17) ((S(z) − 1)M^{-1}(z) + C2(z,η))y(t) = 0, i.e. C2(z,η)y(t) = −(S(z) − 1)M^{-1}(z)y(t).
Using Equation (3),
(18) C2(z,η) = β^T(z)η = [z^{-1} z^{-2} ⋯ z^{-n}][η1 η2 ⋯ ηn]^T,
where one special basis function vector is used here, α(z) = β(z) = [z^{-1} z^{-2} ⋯ z^{-n}]^T. Substituting this linear affine form of the parametrized controller C2(z,η) into Equation (17), we have
(19) [z^{-1} z^{-2} ⋯ z^{-n}][η1 η2 ⋯ ηn]^T y(t) = −(S(z) − 1)M^{-1}(z)y(t).
Rewriting Equation (19) as one linear regression model,
η1 y(t−1) + η2 y(t−2) + ⋯ + ηn y(t−n) = (1 − S(z))M^{-1}(z)y(t).
Hence
(20) y(t) = [M(z)/(1 − S(z))][y(t−1) y(t−2) ⋯ y(t−n)][η1 η2 ⋯ ηn]^T.
Introduce the regression vector ϕ(t) as
ϕ(t) = [M(z)/(1 − S(z))][y(t−1) y(t−2) ⋯ y(t−n)]^T.
Then Equation (19) becomes a linear regression model,
(21) y(t) = ϕ^T(t)η.
After the unknown parameter vector η is identified and its estimator η̂ is substituted into the parametrized controller, the controller C2(z,η̂) is obtained. Applying this obtained controller C2(z,η̂) to the second term of Equation (14), we have
(22) u(t) = −C1(z,θ)C2(z,η̂)(S(z)/(S(z) − 1))y(t) = −[S(z)C2(z,η̂)/(S(z) − 1)]y(t)[z^{-1} z^{-2} ⋯ z^{-n}][θ1 θ2 ⋯ θn]^T.
Similarly, based on that special basis function vector α(z), Equation (22) can also be written as another linear regression model,
(23) u(t) = φ^T(t)θ,
where the regressor vector φ(t) is defined as
φ(t) = −[S(z)C2(z,η̂)/(S(z) − 1)][y(t−1) y(t−2) ⋯ y(t−n)]^T.
From the two linear regression models (21) and (23) we see that the unknown parameter vector η can first be identified from the linear regression model (21); then, after substituting its estimator η̂ into the regressor vector φ(t), the other unknown parameter vector θ is obtained from the linear regression model (23). So here we only give a detailed least squares identification algorithm for the unknown parameter vector η in the linear regression model (21); the identification of θ is similar.

For the linear regression model (21) with respect to the unknown parameter vector η, the least squares algorithm is easily applied, i.e.
(24) η̂ = [Σ_{t=1}^{N} ϕ(t)ϕ^T(t)]^{-1}[Σ_{t=1}^{N} ϕ(t)y(t)].
Its general recursive least squares form is
(25) η̂(t) = η̂(t−1) + g(t)[y(t) − ϕ^T(t)η̂(t−1)], g(t) = P(t−1)ϕ(t)[1 + ϕ^T(t)P(t−1)ϕ(t)]^{-1}, P(t) = P(t−1) − g(t)ϕ^T(t)P(t−1),
where η̂(t) denotes the parameter estimate at iteration t and g(t) is the gain vector. Furthermore, the recursive form (25) can be written in the following simplified form:
(26) η̂(t) = η̂(t−1) + P(t)ϕ(t)[y(t) − ϕ^T(t)η̂(t−1)].
This alternative expression will be useful in the subsequent analysis.
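A minimal implementation of the recursive update (25)-(26) might look as follows; `phi_seq` and `y_seq` are assumed to hold the filtered regressors ϕ(t) and outputs y(t) already built from the data.

```python
# Recursive least squares update of Equations (25)-(26) for y(t) = phi(t)^T eta.
import numpy as np

def rls(phi_seq, y_seq, p0=1e3):
    """phi_seq: (N, n) regressors; y_seq: (N,) outputs.  Returns (eta_hat, P)."""
    N, n = phi_seq.shape
    eta = np.zeros(n)
    P = p0 * np.eye(n)                         # large initial covariance
    for t in range(N):
        phi = phi_seq[t]
        err = y_seq[t] - phi @ eta             # prediction error
        g = P @ phi / (1.0 + phi @ P @ phi)    # gain vector g(t)
        eta = eta + g * err                    # parameter update (25)
        P = P - np.outer(g, phi) @ P           # covariance update
    return eta, P
```

With a sufficiently large initial value p0, the recursive estimate after processing all N samples is approximately equal to the batch least squares solution (24).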

For the above recursive form to hold and for the least squares algorithm to converge, one condition is needed to guarantee that the inverse of Σ_{t=1}^{N} ϕ(t)ϕ^T(t) exists. Roughly speaking, this condition corresponds to persistent excitation, which is described in detail as follows.

Remark

Persistence of excitation

The regression vector ϕ(t) is persistently exciting with a level of excitation α0 > 0 if there exists a constant α1 > 0 such that
α0 I ≤ Σ_{t=1}^{N} ϕ(t)ϕ^T(t) ≤ α1 I,
where I is the identity matrix.

The above inequality means that the matrix Σ_{t=1}^{N} ϕ(t)ϕ^T(t) is non-singular; the concept of persistent excitation requires that ϕ(t) varies with time in such a way that the sum of the matrices ϕ(t)ϕ^T(t) is uniformly positive definite over the time instants t = 1, 2, …, N.
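Numerically, the excitation levels α0 and α1 can be read off the extreme eigenvalues of the accumulated regressor matrix, as in this sketch.

```python
# Sketch of a numerical persistence-of-excitation check: the smallest and largest
# eigenvalues of sum_t phi(t) phi(t)^T give the levels alpha_0 and alpha_1.
import numpy as np

def excitation_levels(phi_seq):
    """phi_seq: array of shape (N, n) of regression vectors."""
    R = phi_seq.T @ phi_seq                 # sum of phi(t) phi(t)^T
    eigvals = np.linalg.eigvalsh(R)         # symmetric matrix, use eigvalsh
    return eigvals.min(), eigvals.max()     # (alpha_0, alpha_1)

# The regression matrix is invertible (and RLS well behaved) only if alpha_0 > 0.
```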

6. Statistical accuracy analysis

Accuracy analysis is an important topic in system identification and parameter estimation theory, as it not only shows the accuracy of the parameter estimator but also provides a useful tool for other research fields, such as optimal input design and model structure validation. Two statistical quantities are commonly used in accuracy analysis, the expectation and the variance, as the variance measures the approximation error between the true value and its estimator. If the approximation error is not tolerable, the parameter estimator is considered biased, and the whole system identification process must be repeated until an unbiased estimator is obtained. The main contribution of this section is to derive the variance matrix for the two unknown parameter vectors (θ, η) through our own mathematical derivation.

Consider the optimization problem (14) with the collected data set Z^N = {u(t), y(t)}_{t=1}^{N}, where N is the total number of data points. The two unknown parameter vectors (θ, η) are identified by solving the optimization problem
(27) (θ̂_N, η̂_N) = argmin_{(θ,η)} J_VR^N(θ,η).
The asymptotic estimators are defined through the limit form
(28) (θ*, η*) = argmin_{(θ,η)} lim_{N→∞} E J_VR^N(θ,η),
where E is the expectation operator and E J_VR^N(θ,η) is rewritten as
(29) E J_VR^N(θ,η) = E[M1(z)u(t) − C1(z,θ)y(t) + M1(z)C1(z,θ)C2(z,η)y(t)]^2 + E[u(t) + C1(z,θ)C2(z,η)M2(z)y(t)]^2,
where M1(z) and M2(z) are defined as M1(z) = M(z), M2(z) = S(z)/(S(z) − 1). From probability theory (Campi & Weyer, Citation2002), the following asymptotic relations hold as N → ∞:
(30) J_VR^N(θ,η) → J_VR(θ,η) = E J_VR^N(θ,η), (θ̂_N, η̂_N) → (θ*, η*), N → ∞.
Using the first order optimality condition, if (θ̂_N, η̂_N) and (θ*, η*) are the minimizers of their respective cost functions, the following relations hold:
(31) [∂J_VR^N(θ,η)/∂θ]_{(θ̂_N,η̂_N)} = 0; [∂J_VR(θ,η)/∂θ]_{(θ*,η*)} = 0,
(32) [∂J_VR^N(θ,η)/∂η]_{(θ̂_N,η̂_N)} = 0; [∂J_VR(θ,η)/∂η]_{(θ*,η*)} = 0.
Expanding the derivative with respect to θ,
(33) (2/N) Σ_{t=1}^{N} [M1(z)u(t) − C1(z,θ)y(t) + M1(z)C1(z,θ)C2(z,η)y(t)] × [−(∂C1(z,θ)/∂θ)y(t) + M1(z)(∂C1(z,θ)/∂θ)C2(z,η)y(t)] + (2/N) Σ_{t=1}^{N} [u(t) + C1(z,θ)C2(z,η)M2(z)y(t)] × [(∂C1(z,θ)/∂θ)C2(z,η)M2(z)y(t)] = 0.
Similarly, expanding the derivative with respect to η,
(34) (2/N) Σ_{t=1}^{N} [M1(z)u(t) − C1(z,θ)y(t) + M1(z)C1(z,θ)C2(z,η)y(t)] × [M1(z)C1(z,θ)(∂C2(z,η)/∂η)y(t)] + (2/N) Σ_{t=1}^{N} [u(t) + C1(z,θ)C2(z,η)M2(z)y(t)] × [(∂C2(z,η)/∂η)C1(z,θ)M2(z)y(t)] = 0.
Let ξ = (θ, η); then the above relations are rewritten as
(35) ξ̂_N = (θ̂_N, η̂_N) = argmin_{ξ=(θ,η)} J_VR^N(ξ), ξ* = (θ*, η*) = argmin_{ξ=(θ,η)} J_VR(ξ),
(36) [∂J_VR^N(ξ)/∂ξ]_{ξ̂_N} = [∂J_VR^N(ξ)/∂θ; ∂J_VR^N(ξ)/∂η]_{ξ̂_N=(θ̂_N,η̂_N)} = 0.
Using a Taylor series expansion, the first order partial derivative can be expanded around the asymptotic limit point ξ* = (θ*, η*), i.e.
(37) [∂J_VR^N(ξ)/∂ξ]_{ξ̂_N} = [∂J_VR^N(ξ)/∂ξ]_{ξ*} + [∂^2 J_VR^N(ξ)/∂ξ^2]_{ξ*}(ξ̂_N − ξ*).
Expanding this equation,
(38) [0; 0] = [∂J_VR^N(ξ)/∂θ; ∂J_VR^N(ξ)/∂η]_{ξ*} + [∂^2 J_VR^N(ξ)/∂θ^2, ∂^2 J_VR^N(ξ)/∂θ∂η; ∂^2 J_VR^N(ξ)/∂θ∂η, ∂^2 J_VR^N(ξ)/∂η^2]_{ξ*} [θ̂_N − θ*; η̂_N − η*].
From Equation (37), the following relation holds:
(39) (ξ̂_N − ξ*) = −[∂^2 J_VR^N(ξ)/∂θ^2, ∂^2 J_VR^N(ξ)/∂θ∂η; ∂^2 J_VR^N(ξ)/∂θ∂η, ∂^2 J_VR^N(ξ)/∂η^2]^{-1} × [∂J_VR^N(ξ)/∂θ; ∂J_VR^N(ξ)/∂η].
From Equation (39), the asymptotic result for the unknown parameter vector ξ is derived as
(40) √N(ξ̂_N − ξ*) → N(0, A), as N → ∞,
where N(0, A) denotes a normal distribution with zero mean and variance matrix A. Equation (40) tells us that the parameter estimator ξ̂_N converges to its asymptotic limit value ξ*, and the scaled approximation error approaches a Gaussian random variable. The asymptotic variance matrix A is defined as
(41) A = [Aθ, Aθη; Aθη, Aη] = [∂^2 J_VR^N(ξ)/∂θ^2, ∂^2 J_VR^N(ξ)/∂θ∂η; ∂^2 J_VR^N(ξ)/∂θ∂η, ∂^2 J_VR^N(ξ)/∂η^2]^{-1} cov[∂J_VR^N(ξ)/∂θ; ∂J_VR^N(ξ)/∂η] × [∂^2 J_VR^N(ξ)/∂θ^2, ∂^2 J_VR^N(ξ)/∂θ∂η; ∂^2 J_VR^N(ξ)/∂θ∂η, ∂^2 J_VR^N(ξ)/∂η^2]^{-T}.
Equation (41) tells us that the two parameter estimators are random variables, and the detailed expressions of the main diagonal blocks of the matrix A are needed, as these diagonal blocks correspond to the asymptotic variance sub-matrices of the two unknown parameter vectors (θ, η).

Computing the basic partial derivatives, the following results hold:
(42) ∂J_VR^N(ξ)/∂θ = (2/N) Σ_{t=1}^{N} [M1(z)u(t) − C1(z,θ)y(t) + M1(z)C1(z,θ)C2(z,η)y(t)] × [−(∂C1(z,θ)/∂θ)y(t) + M1(z)(∂C1(z,θ)/∂θ)C2(z,η)y(t)] + (2/N) Σ_{t=1}^{N} [u(t) + C1(z,θ)C2(z,η)M2(z)y(t)] × [(∂C1(z,θ)/∂θ)C2(z,η)M2(z)y(t)],
∂J_VR^N(ξ)/∂η = (2/N) Σ_{t=1}^{N} [M1(z)u(t) − C1(z,θ)y(t) + M1(z)C1(z,θ)C2(z,η)y(t)] × [M1(z)C1(z,θ)(∂C2(z,η)/∂η)y(t)] + (2/N) Σ_{t=1}^{N} [u(t) + C1(z,θ)C2(z,η)M2(z)y(t)] × [(∂C2(z,η)/∂η)C1(z,θ)M2(z)y(t)].
Using the separable property of the parametrized controllers, it holds that
(43) ∂^2 C1(z,θ)/∂θ^2 = ∂^2 C2(z,η)/∂η^2 = 0.
Through some basic calculations, we get
∂^2 J_VR^N(θ,η)/∂θ^2 = (2/N) Σ_{t=1}^{N} [−(∂C1(z,θ)/∂θ)y(t) + M1(z)(∂C1(z,θ)/∂θ)C2(z,η)y(t)]^2 + (2/N) Σ_{t=1}^{N} [(∂C1(z,θ)/∂θ)C2(z,η)M2(z)y(t)]^2,
∂^2 J_VR^N(θ,η)/∂η^2 = (2/N) Σ_{t=1}^{N} [M1(z)C1(z,θ)(∂C2(z,η)/∂η)y(t)]^2 + (2/N) Σ_{t=1}^{N} [(∂C2(z,η)/∂η)C1(z,θ)M2(z)y(t)]^2,
(44) ∂^2 J_VR^N(ξ)/∂θ∂η = (2/N) Σ_{t=1}^{N} [M1(z)C1(z,θ)(∂C2(z,η)/∂η)y(t)] × [−(∂C1(z,θ)/∂θ)y(t) + M1(z)(∂C1(z,θ)/∂θ)C2(z,η)y(t)] + (2/N) Σ_{t=1}^{N} [M1(z)u(t) − C1(z,θ)y(t) + M1(z)C1(z,θ)C2(z,η)y(t)] × [M1(z)(∂C1(z,θ)/∂θ)(∂C2(z,η)/∂η)y(t)] + (2/N) Σ_{t=1}^{N} [C1(z,θ)(∂C2(z,η)/∂η)M2(z)y(t)] × [(∂C1(z,θ)/∂θ)C2(z,η)M2(z)y(t)] + (2/N) Σ_{t=1}^{N} [u(t) + C1(z,θ)C2(z,η)M2(z)y(t)] × [(∂C1(z,θ)/∂θ)(∂C2(z,η)/∂η)M2(z)y(t)],
∂^2 J_VR^N(ξ)/∂θ∂η = ∂^2 J_VR^N(ξ)/∂η∂θ.
Taking the expectation on both sides of Equation (42) and using the input–output relation (1), we have
(45) E[∂J_VR^N(ξ)/∂θ] = M1(z)u(t)[−∂C1(z,θ)/∂θ + M1(z)(∂C1(z,θ)/∂θ)C2(z,η)] P(z)u(t) + [M1(z)C1(z,θ)C2(z,η) − C1(z,θ)][−∂C1(z,θ)/∂θ + M1(z)(∂C1(z,θ)/∂θ)C2(z,η)][P^2(z)u^2(t) + σ_d^2] + M1(z)M2(z)(∂C1(z,θ)/∂θ)d(t)P(z)u(t) + [C1(z,θ)C2(z,η)M2(z)][(∂C1(z,θ)/∂θ)C2(z,η)M2(z)][P^2(z)u^2(t) + σ_d^2],
where σ_d^2 denotes the variance of the external disturbance d(t).

Applying some results from probability theory (Campi & Weyer, Citation2002), we obtain
(46) cov[∂J_VR^N(ξ)/∂θ] = E[∂J_VR^N(ξ)/∂θ − E(∂J_VR^N(ξ)/∂θ)]^2 = [(M1(z)C2(z,η) − 1)^2 C1(z,θ)(∂C1(z,θ)/∂θ) · 2P(z)u(t)d(t) + (∂C1(z,θ)/∂θ)C2(z,η)M2(z)u(t)d(t) + M2^2(z)C1(z,θ)C2^2(z,η)(∂C1(z,θ)/∂θ) · 2P(z)u(t)d(t)]^2.
Using Parseval's relation,
(47) cov[∂J_VR^N(ξ)/∂θ] = (1/2π) ∫_{−π}^{π} x1 ϕ_u(w) σ_d^2 dw,
where ϕ_u(w) is the input power spectrum and x1 is defined as
x1 = [(M1(z)C2(z,η) − 1)^2 C1(z,θ)(∂C1(z,θ)/∂θ) + (∂C1(z,θ)/∂θ)C2(z,η)M2(z) + M2^2(z)C1(z,θ)C2^2(z,η)(∂C1(z,θ)/∂θ)]^2.
Similarly, the other asymptotic sub-matrix can be rewritten as
cov[∂J_VR^N(ξ)/∂η] = (1/2π) ∫_{−π}^{π} x2 ϕ_u(w) σ_d^2 dw,
(48) x2 = {[2P(z)C1(z,θ)(M1(z)C2(z,η) − 1) + M1(z)](∂C2(z,η)/∂η) · M1(z)C1(z,θ)(∂C2(z,η)/∂η)}^2 + {(∂C2(z,η)/∂η)C1(z,θ)M2(z) + C1(z,θ)C2(z,η)M2(z)(∂C2(z,η)/∂η) · 2P^2(z)}^2.
Equations (47) and (48) give the auto-covariances of the two gradient estimates.

The cross covariance between the two gradient estimates is
(49) cov(∂J_VR^N(ξ)/∂θ, ∂J_VR^N(ξ)/∂η) = (1/2π) ∫_{−π}^{π} x3 · 2P(z) ϕ_u(w) σ_d^2 dw, x3 = (x1 x2)^{1/2}.
Substituting Equations (47), (48) and (49) into the variance matrix of the first order gradient estimate, we obtain
(50) cov[∂J_VR^N(ξ)/∂θ; ∂J_VR^N(ξ)/∂η] = [x1 Eu^2(t) σ_d^2, x3 2P(z) Eu^2(t) σ_d^2; x3 2P(z) Eu^2(t) σ_d^2, x2 Eu^2(t) σ_d^2].
When N → ∞, the following equalities hold:
(51) [∂^2 J_VR^N(ξ)/∂θ∂η]_{ξ̂_N} → E[∂^2 J_VR^N(ξ)/∂θ∂η]_{ξ*} = x4 Eu^2(t) + x5 σ_d^2, x4 = M1(z)C1(z,θ)(∂C2(z,η)/∂η)(−∂C1(z,θ)/∂θ + M1(z)(∂C1(z,θ)/∂θ)C2(z,η) + (M2(z)C2(z,η) − 1)C1(z,θ)M1(z)(∂C1(z,θ)/∂θ)(∂C2(z,η)/∂η))P^2(z),
(52) [∂^2 J_VR^N(ξ)/∂η^2]_{ξ̂_N} → E[∂^2 J_VR^N(ξ)/∂η^2]_{ξ*} = x6 Eu^2(t) + x7 σ_d^2, x6 = M1^2(z)C1^2(z,θ)(∂C2(z,η)/∂η)P^2(z), x7 = M1^2(z)C1^2(z,θ)(∂C2(z,η)/∂η),
(53) [∂^2 J_VR^N(ξ)/∂θ^2]_{ξ̂_N} → E[∂^2 J_VR^N(ξ)/∂θ^2]_{ξ*} = x8 Eu^2(t) + x9 σ_d^2, x8 = (M1(z)(∂C1(z,θ)/∂θ)C2(z,η) − ∂C1(z,θ)/∂θ)P^2(z), x9 = M1(z)(∂C1(z,θ)/∂θ)C2(z,η) − ∂C1(z,θ)/∂θ.
Applying Equations (51), (52) and (53) to the matrix computation, we have
(54) [∂^2 J_VR^N(ξ)/∂θ^2, ∂^2 J_VR^N(ξ)/∂θ∂η; ∂^2 J_VR^N(ξ)/∂θ∂η, ∂^2 J_VR^N(ξ)/∂η^2] → [x8 Eu^2(t) + x9 σ_d^2, x4 Eu^2(t) + x5 σ_d^2; x4 Eu^2(t) + x5 σ_d^2, x6 Eu^2(t) + x7 σ_d^2].
Since the inverse of the matrix (54) is needed in the computation of the asymptotic variance matrix A, we use the properties of block matrices to get
(55) [∂^2 J_VR^N(ξ)/∂θ^2, ∂^2 J_VR^N(ξ)/∂θ∂η; ∂^2 J_VR^N(ξ)/∂θ∂η, ∂^2 J_VR^N(ξ)/∂η^2]^{-1} = [x6 ϕ_u + x7 σ_d^2, −(x4 ϕ_u + x5 σ_d^2); −(x4 ϕ_u + x5 σ_d^2), x8 ϕ_u + x9 σ_d^2] / [(x6 ϕ_u + x7 σ_d^2)(x8 ϕ_u + x9 σ_d^2) − (x4 ϕ_u + x5 σ_d^2)^2],
where, to simplify notation, we use the relation between the input power spectrum ϕ_u(w) and the correlation Eu^2(t) and drop the argument w, i.e. ϕ_u = ϕ_u(w) = Eu^2(t). Substituting Equations (50) and (55) into the computation of the asymptotic variance matrix A, the asymptotic variance matrix corresponding to the two unknown parameter vectors (θ, η) is derived as
(56) A = [x6 ϕ_u + x7 σ_d^2, −(x4 ϕ_u + x5 σ_d^2); −(x4 ϕ_u + x5 σ_d^2), x8 ϕ_u + x9 σ_d^2] / [(x6 ϕ_u + x7 σ_d^2)(x8 ϕ_u + x9 σ_d^2) − (x4 ϕ_u + x5 σ_d^2)^2]^2 × [x1 ϕ_u σ_d^2, x3 2P(z) ϕ_u σ_d^2; x3 2P(z) ϕ_u σ_d^2, x2 ϕ_u σ_d^2] × [x6 ϕ_u + x7 σ_d^2, −(x4 ϕ_u + x5 σ_d^2); −(x4 ϕ_u + x5 σ_d^2), x8 ϕ_u + x9 σ_d^2].
The product of the three matrices in Equation (56) can be factored as
(57) [x6, −x4, x7, −x5; −x4, x8, −x5, x9] [ϕ_u, 0; 0, ϕ_u; σ_d^2, 0; 0, σ_d^2] · ϕ_u σ_d^2 × [x1, x3 2P(z); x3 2P(z), x2] [ϕ_u, 0, σ_d^2, 0; 0, ϕ_u, 0, σ_d^2] × [x6, −x4; −x4, x8; x7, −x5; −x5, x9].
Observing the obtained asymptotic variance matrix A, the sub-matrix in block (1,1) is the asymptotic variance matrix cov(θ̂_N) of the unknown parameter vector θ. Similarly, the sub-matrix in block (2,2) is the asymptotic variance matrix cov(η̂_N) of the unknown parameter vector η. As these two sub-matrices lie on the main diagonal, only the results for the two diagonal blocks are given. After simple but tedious calculation, the asymptotic variance matrix A takes the form
(58) A = diag(A11, A22) + σ_d^2 × diag(x1 x7^2 − 4x3 x5 x7 P(z) + x2 x5^2, x1 x5^2 − 4x3 x5 x9 P(z) + x2 x9^2),
(59) A11 = (2x1 x6 x7 − 4x3 x5 x6 P(z) + 2x2 x4 x5 − 4x3 x4 x7 P(z)) σ_d^2, A22 = (2x1 x4 x5 − 4x3 x4 x9 − 2x3 x5 x8 + 2x2 x8 x9) σ_d^2.
From Equation (58), the asymptotic variance matrix of the two controller parameter vectors (θ, η) in the closed-loop system is derived. The main contribution of this paper is to use probability theory and matrix theory to derive this asymptotic variance matrix; the asymptotic variance matrices cov(θ̂_N) and cov(η̂_N) then simplify to
(60) cov(θ̂_N) = A11 + (x1 x7^2 − 4x3 x5 x7 P(z) + x2 x5^2) σ_d^2, cov(η̂_N) = A22 + (x1 x5^2 − 4x3 x5 x9 P(z) + x2 x9^2) σ_d^2.
The use of these two asymptotic variance matrices cov(θ̂_N) and cov(η̂_N) is shown in Section 7.
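As a sanity check on the analytic expressions, the sandwich structure of Equation (41) can also be evaluated numerically from sample gradients and a numerical Hessian; the sketch below is a generic estimator of this form, not a reproduction of the closed-form result (56)-(60).

```python
# Generic numerical counterpart of the sandwich expression (41):
#   A ≈ H^{-1} C H^{-T},
# with H the Hessian of J_VR^N at the estimate and C the sample covariance of
# the per-sample gradient contributions.
import numpy as np

def sandwich_covariance(grad_samples, hessian):
    """grad_samples: (N, p) per-sample gradients at (theta_hat, eta_hat);
    hessian: (p, p) second derivative matrix at the same point."""
    N = grad_samples.shape[0]
    C = np.cov(grad_samples, rowvar=False)      # covariance of the gradient
    H_inv = np.linalg.inv(hessian)
    return H_inv @ C @ H_inv.T / N              # approximate covariance of the estimator
```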

7. Application

Consider the two asymptotic variance matrices cov(θ̂_N) and cov(η̂_N) obtained in Section 6 through our own mathematical derivations; the ideal case is that the asymptotic variance matrix cov(θ̂_N) or cov(η̂_N) is zero or approaches zero as N → ∞. Thus the two asymptotic variance matrices cov(θ̂_N) and cov(η̂_N) can measure the identification quality, or test whether the parameter estimators are efficient or asymptotically efficient (Care et al., Citation2018). In order to achieve efficiency or asymptotic efficiency, the asymptotic variance matrix obtained here is an important tool. In this section we give two applications of the asymptotic variance matrices, namely optimal input design and model structure validation.

7.1. Optimal input design

The problem of optimal input design is to design an optimal input power spectrum ϕ_u(w) such that a chosen performance index is minimized with respect to the decision variable ϕ_u(w) (Weyer et al., Citation2017). Here, for convenience, the trace of the asymptotic variance matrix A is chosen as the performance index; the goal is to minimize this index by designing one optimal input power spectrum ϕ_u(w). From Equation (60), we obtain the optimization problem
(61) min_{ϕ_u} [(x1 x6^2 − 4x3 x4 x6 P(z) + x2 x4^2 + x1 x4^2 + x2 x8^2) ϕ_u^2 + x5 x6 ϕ_u] / [(x6 ϕ_u + x7 σ_d^2)(x8 ϕ_u + x9 σ_d^2) − (x4 ϕ_u + x5 σ_d^2)^2]^2.
After introducing coefficients {b2, b1, b0} and {a2, a1, a0} to replace the corresponding terms in Equation (61), the optimization problem (61) can be simplified to
(62) min_{ϕ_u(w)} [b2 ϕ_u^2(w) + b1 ϕ_u(w) + b0] / [a2 ϕ_u^2(w) + a1 ϕ_u(w) + a0].
Using the first order optimality condition, it suffices to find the stationary point, i.e.
(63) ∂/∂ϕ_u(w) {[b2 ϕ_u^2(w) + b1 ϕ_u(w) + b0] / [a2 ϕ_u^2(w) + a1 ϕ_u(w) + a0]} = 0.
Differentiating with respect to the input power spectrum ϕ_u(w) and setting the derivative equal to zero, we have
(64) (a1 b2 − a2 b1) ϕ_u^2(w) + (2a0 b2 − 2a2 b0) ϕ_u(w) + (a0 b1 − a1 b0) = 0.
Using the quadratic root formula, the root of Equation (64) is obtained as
(65) ϕ_u(w) = {−(a0 b2 − a2 b0) + [(a0 b2 − a2 b0)^2 − (a1 b2 − a2 b1)(a0 b1 − a1 b0)]^{1/2}} / (a1 b2 − a2 b1).
Equation (65) gives one optimal input power spectrum; it can be used in system identification to design the identification experiment.
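Once the coefficients {a0, a1, a2} and {b0, b1, b2} have been assembled, the candidate optimal spectrum follows from the quadratic equation (64); a small sketch:

```python
# Sketch of the optimal-input computation (62)-(65): minimise a ratio of two
# quadratics in phi_u via the stationarity condition (64), keeping positive roots.
import numpy as np

def optimal_spectrum(a, b):
    """a = (a0, a1, a2), b = (b0, b1, b2) of Equation (62); returns candidate phi_u."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    # Stationarity: (a1*b2 - a2*b1) phi^2 + 2(a0*b2 - a2*b0) phi + (a0*b1 - a1*b0) = 0
    coeffs = [a1*b2 - a2*b1, 2.0*(a0*b2 - a2*b0), a0*b1 - a1*b0]
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    return real[real > 0]                      # a power spectrum must be positive
```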

7.2. Model structure validation

Model structure validation also comes from system identification theory, where the standard cross-correlation test is used to assess the confidence interval of an unknown parameter estimator. Here our obtained asymptotic variance matrix is used to construct an uncertainty bound for the parameter estimator. This uncertainty bound is called the confidence interval, and it constitutes a guaranteed confidence region test for the parameter estimator under the closed-loop condition.

Based on the obtained asymptotic variance matrix cov(θ̂_N), the parameter estimator θ̂_N converges to its limit value θ*, and the scaled estimation error asymptotically converges to a normally distributed random variable with zero mean and variance matrix cov(θ̂_N), i.e.
(66) √N(θ̂_N − θ*) → N(0, cov(θ̂_N)), as N → ∞.
The asymptotic relation (66) can be rewritten as a quadratic form, giving a χ^2 distribution:
(67) N(θ̂_N − θ*)^T [cov(θ̂_N)]^{-1} (θ̂_N − θ*) → χ_n^2, as N → ∞,
where n is the number of degrees of freedom of the χ^2 distribution, equal to the dimension, or number, of the unknown parameters.

Equation (67) implies that the random variable θ̂_N satisfies an uncertainty bound,
(68) θ̂_N ∈ D(α, θ*); D(α, θ*) = {θ : N(θ − θ*)^T [cov(θ̂_N)]^{-1} (θ − θ*) ≤ χ_{n,α}^2},
with χ_{n,α}^2 corresponding to a probability level α in the χ^2 distribution.

To quantify the uncertainty on θ*, rather than on θ̂_N, note that for every realization θ̂_N it holds that
(69) θ̂_N ∈ D(α, θ*) ⟺ θ* ∈ D(α, θ̂_N).
It follows that
(70) θ* ∈ D(α, θ̂_N) with probability α, D(α, θ̂_N) = {θ : N(θ̂_N − θ)^T [cov(θ̂_N)]^{-1} (θ̂_N − θ) ≤ χ_{n,α}^2}.
Equations (69) and (70) give the confidence regions for the unknown parameter vector θ, and the probability that the event θ̂_N ∈ D(α, θ*) holds is at least α. The same analysis holds for the other unknown parameter vector η and is omitted here to avoid repetition.
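The test (67)-(70) amounts to comparing a scaled quadratic form with a χ^2 quantile; a minimal sketch, assuming cov(θ̂_N) has already been computed:

```python
# Sketch of the confidence-region test (67)-(70).
import numpy as np
from scipy.stats import chi2

def in_confidence_region(theta_hat, theta_ref, cov_theta, N, alpha=0.95):
    """True if theta_ref lies inside D(alpha, theta_hat) of Equation (70)."""
    e = theta_hat - theta_ref
    stat = N * e @ np.linalg.inv(cov_theta) @ e      # N e^T cov^{-1} e
    return stat <= chi2.ppf(alpha, df=len(theta_hat))
```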

8. Simulation example

In this section, one example is used to demonstrate the effectiveness of our proposed theories on virtual reference feedback tuning control.

Consider the following discrete time linear time invariant system, whose true transfer function P(z) is
P(z) = (z − 1.2)(z − 0.4) / [z(z − 0.3)(z − 0.8)].
Furthermore, two classical PID-type controllers are used in the simulation:
C1(z,θ) = α^T(z)θ = [z^2/(z^2 − z), z/(z^2 − z), 1/(z^2 − z)] [θ1, θ2, θ3]^T,
C2(z,η) = β^T(z)η = [z^4/(z^4 − z), z^3/(z^4 − z), z^2/(z^4 − z), z/(z^4 − z), 1/(z^2 − z)] [η1, η2, η3, η4, η5]^T.
The two true controllers are given as
C1(z,θ) = α^T(z)θ = [z^2/(z^2 − z), z/(z^2 − z), 1/(z^2 − z)] [0.8, 0.4, 0.3]^T,
C2(z,η) = β^T(z)η = [z^4/(z^4 − z), z^3/(z^4 − z), z^2/(z^4 − z), z/(z^4 − z), 1/(z^2 − z)] [0.3, 0.2, 0.1, 0.1, 0.15]^T.
The expected closed-loop transfer function and the expected sensitivity function are chosen as
M(z) = z(z − 1)(0.86z^4 − 1.1z^3 + 3.9z^2 + 0.8z + 0.48) / (z^7 − 3z^6 − 0.96z^5 − 0.72z^4 − 0.93z^3 + 3.9z^2 + 0.8z + 0.48),
S(z) = (0.04z^5 + 0.01z^4 − 0.006z^3 − 0.035z^2 − 0.04z + 0.01) / (0.5z^6 + z^5 − z^4 + 0.24z^3 + 0.12z^2 + 0.1z + 1).

In the whole simulation, the input–output data {u(t), y(t)} are collected within the closed-loop environment, and the number of data points which we can collect is 2000. Applying virtual reference feedback tuning control to design the controller parameters, the plant P(z) is excited by one sinusoidal input signal, whose curve is plotted in Figure 4 with 1000 data points. In Figure 4 there are two curves: one is the continuous curve and the other is its sampled (discrete) version. The measured output response is plotted in Figure 5. The classical least squares algorithm is used to solve the optimization problem (14). Before starting the recursive algorithm, the initial values of the unknown parameter vectors are selected as
θ = [0.7, 0.2, 0.1]^T, η = [0.26, 0.1, 0.01, 0.01, 0.071]^T.
Based on the input and output data sampled from these curves, we need to verify two aspects here: one is the identification algorithm, and the other is the statistical accuracy analysis, or variance analysis, of the unknown controller parameters.

Figure 4. The input signal.


Figure 5. The measured output signal.


First, one experiment is performed with the input signal in Figure 4 and the output signal in Figure 5, and the classical recursive least squares algorithm is applied to identify the unknown parameter vectors (θ, η). After the parameter estimators (θ̂_N, η̂_N) are obtained, how accurate are they? To illustrate this aspect, we substitute the parameter vector θ̂_N into the closed-loop transfer function from the excitation signal r(t) to the measured output y(t) and compare this actual closed-loop transfer function with the expected, or designed, closed-loop transfer function M(z). Figure 6 compares the Bode responses of the actual closed-loop transfer function and the expected closed-loop transfer function M(z). From Figure 6 we see that these two closed-loop transfer functions are very close, which means their approximation error can be neglected. Furthermore, Figure 6 also shows the open-loop transfer function from r(t) to y(t), so the merit of the closed-loop system is easily seen. Second, to illustrate the statistical accuracy analysis, or variance analysis, of the parameter estimators (θ̂_N, η̂_N), for clarity of presentation we only give the variance curves of the parameter estimator θ̂_N in Figure 7, as there are only three elements in θ̂_N. Observing the three variance curves of the three unknown parameters {θ̂_1, θ̂_2, θ̂_3}: from initialization to 1 s the three variance errors are almost the same; from 1 to 3 s they start to oscillate, and these deviation errors persist until 3 s; from 3 s onwards the three variance errors are stable and approach the same constant value. Moreover, from the variance analysis of the three elements of the parameter vector θ, the classical least squares algorithm is not well suited to identifying the unknown controller parameters, as the parameter estimators identified by this algorithm are all biased. This identification result is not our expected goal, so other identification algorithms need to be considered, for example our identification algorithms from previously published papers (Jianhong, Citation2012, Citation2013, Citation2014a, Citation2014b, Citation2017, Citation2019). Generally, statistical accuracy analysis can be used to test whether the identification accuracy is efficient or asymptotically efficient.

Figure 6. Comparisons of the Bode responses.


9. Conclusion

  1. In this paper, from a statistical point of view, statistical accuracy analysis for virtual reference feedback tuning is studied for a closed-loop system with two degrees of freedom controllers. When two desired closed-loop transfer functions are given, a virtual input and a virtual disturbance are constructed, respectively, to obtain one cost function, which corresponds to two model-matching problems. The classical least squares algorithm is applied to identify the unknown controller parameters after transforming that cost function into its simplified linear regression form. Then probability theory and matrix theory are introduced to derive the asymptotic variance matrix of the parameter vector, so that optimal input design and model structure validation can be analysed based on the obtained asymptotic variance matrix.

  2. This paper extends our previous research, on the basis of our many published papers on virtual reference feedback tuning control, and our goal here is to give a complete analysis and summary of that work. Our future work will turn to nonlinear and adaptive control.

Figure 7. Variance analysis for the first three parameters.


Acknowledgments

This work was supported by the Natural Science Foundation of China (Grant No. 60874037) and by grants from the Foundation of Jiangxi University of Science and Technology (No. jxxjb18020). The first author is grateful to Professor Eduardo F. Camacho for his warm invitation to his control lab at the University of Seville, Seville, Spain, and thanks him for his assistance and advice on zonotopes in guaranteed state estimation and model predictive control.

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

The data used to support the findings of this study are available from the corresponding author upon request.

Additional information

Funding

This work was supported by the Natural Science Foundation of China [grant number 60874037] and the Grants from the Foundation of Jiangxi University of Science and Technology [grant number jxxjb18020].

References