ABSTRACT
In this paper, we analyse the convergence and stability properties of generalised policy iteration (GPI) applied to discrete-time linear quadratic regulation (LQR) problems. GPI is a class of generalised adaptive dynamic programming methods for solving optimal control problems, and consists of policy evaluation and policy improvement steps. To analyse the convergence and stability of GPI, the dynamic programming (DP) operator is defined; GPI and its equivalent formulations are then presented in terms of the DP operator. Based on these equivalent formulations, the convergence of the approximate value function to the exact one in the policy evaluation step is proven. Furthermore, the positive semi-definiteness, stability, and monotone convergence (PI-mode and VI-mode convergence) of GPI are established under certain conditions on the initial value function. An online least-squares method is also presented for the implementation of GPI. Finally, numerical simulations are carried out to verify the effectiveness of GPI and to further investigate its convergence and stability properties.
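The structure summarised in the abstract, alternating a finite number of policy evaluation sweeps with a greedy policy improvement step, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `gpi_lqr` and the parameters `n_eval` (inner evaluation sweeps) and `n_iter` (outer GPI iterations) are assumptions for this sketch. Setting `n_eval=1` recovers value iteration, while letting `n_eval` grow large approaches policy iteration.

```python
import numpy as np

def gpi_lqr(A, B, Q, R, K0, n_eval=5, n_iter=50):
    """Sketch of generalised policy iteration for discrete-time LQR.

    System: x_{k+1} = A x_k + B u_k, cost sum of x'Qx + u'Ru,
    linear policy u = -Kx. (Names/parameters are illustrative.)
    """
    n = A.shape[0]
    K, P = K0, np.zeros((n, n))
    for _ in range(n_iter):
        Ac = A - B @ K
        # Partial policy evaluation: a finite number of Bellman backups
        # for the current policy K (GPI), instead of solving exactly (PI).
        for _ in range(n_eval):
            P = Q + K.T @ R @ K + Ac.T @ P @ Ac
        # Policy improvement: greedy policy with respect to the current P.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P
```

Under a stabilising initial policy (here `K0 = 0` for a stable `A`), the iterates converge to the solution of the discrete-time algebraic Riccati equation and the corresponding optimal gain.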
Acknowledgements
The authors thank the associate editor and the anonymous reviewers for their valuable suggestions.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. In this paper, a (linear) policy μ(x) = −Kx, or simply, K is said to be stabilising (or a stabilising policy) if all eigenvalues of A − BK lie on the interior of the unit circle, centred at the origin, in the complex domain.
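The stabilisation condition in Note 1 amounts to a spectral-radius check on the closed-loop matrix A − BK. A minimal sketch (the helper name `is_stabilising` is an assumption, not from the paper):

```python
import numpy as np

def is_stabilising(A, B, K):
    """A linear policy u = -Kx is stabilising iff every eigenvalue of
    A - BK lies strictly inside the unit circle, i.e. the spectral
    radius of the closed-loop matrix is less than 1."""
    return np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0
```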
2. In the case of PI, the initial stabilising policy guarantees that all of the updated policies are stabilising, which in turn guarantees the existence of the limit point (μi = −Kix) of Equation (15).