Abstract
In this paper, we analyse the convergence properties of the Dynamic Programming Value Iteration algorithm by exploiting the stability theory of discrete-time switched affine systems. More specifically, by formulating the Value Iteration algorithm as a switched affine system, a Lyapunov-based optimal policy selection strategy is designed to guarantee practical stabilisation of the resulting system towards an invariant set of attraction containing a given target value function. The switching control algorithm, referred to as the Lyapunov-based Value Iteration algorithm, can be regarded as a convergence analysis tool and can be adopted to verify whether, and how, such a target value function can be approached by choosing, at each time slot, from a subset of suitable stationary policies. The practical usage of the proposed algorithm is also discussed. Finally, two different applications are provided to further illustrate and examine the key aspects of the presented approach.
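The abstract's viewpoint can be sketched numerically. The following is a minimal illustration, not the paper's construction: the MDP data, the symbols `P`, `c`, `gamma`, and the greedy sup-norm selection rule are all assumptions. Under a stationary policy pi, discounted Value Iteration is the affine update V_{k+1} = c_pi + gamma * P_pi V_k, so letting pi vary over a set of stationary policies yields a switched affine system, and a Lyapunov-like rule can pick, at each step, the mode that most decreases the distance to a target value function.

```python
import itertools
import numpy as np

# Illustrative sketch (all data and the selection rule are assumptions):
# Value Iteration as a switched affine system with Lyapunov-based
# policy switching towards a target value function.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 3, 0.9

# Per-action row-stochastic transitions P[a, s, :] and stage costs c[a, s].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((n_actions, n_states))

# Target: the optimal value function, from standard (componentwise) VI.
V_star = np.zeros(n_states)
for _ in range(1000):
    V_star = (c + gamma * P @ V_star).min(axis=0)

# Switched affine system: one mode per stationary deterministic policy.
policies = list(itertools.product(range(n_actions), repeat=n_states))
states = np.arange(n_states)

V = np.zeros(n_states)
for _ in range(300):
    # Lyapunov-based switching: among all affine modes
    # V -> c_pi + gamma * P_pi @ V, apply the one whose one-step
    # image is closest (sup norm) to the target V_star.
    candidates = np.array([c[list(pi), states]
                           + gamma * P[list(pi), states] @ V
                           for pi in policies])
    V = candidates[np.argmin(np.abs(candidates - V_star).max(axis=1))]

print(np.max(np.abs(V - V_star)))  # distance to the target value function
```

Since the optimal stationary policy is itself one of the affine modes, the chosen mode's image is never farther from V_star than that mode's, so the sup-norm distance contracts by at least gamma per step and the trajectory approaches the target.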
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 [equations missing from source], where [·] is some positive real scalar.
2 Note that it is [expression missing from source], with [·] a stochastic matrix, being the convex combination of stochastic matrices. From [·], it follows that [·] is Schur stable for all [·].
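The stochasticity claim in note 2 can be checked numerically. The exact expressions are missing from the extracted source; the scalar `gamma` in (0, 1) multiplying the matrix below is our assumption, consistent with a discounted Value Iteration setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def random_stochastic(rng, n):
    """Random row-stochastic matrix: nonnegative rows summing to 1."""
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

# A convex combination of stochastic matrices is again stochastic...
P1, P2 = random_stochastic(rng, n), random_stochastic(rng, n)
w = 0.3
P = w * P1 + (1 - w) * P2
assert np.allclose(P.sum(axis=1), 1.0) and (P >= 0).all()

# ...and scaling it by any gamma in (0, 1) gives a Schur stable matrix:
# rho(gamma * P) = gamma * rho(P) = gamma < 1, since rho(P) = 1 for any
# stochastic P (the all-ones vector is an eigenvector with eigenvalue 1).
gamma = 0.95
rho = max(abs(np.linalg.eigvals(gamma * P)))
print(rho)
```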
3 A ‘bad’ policy is commonly defined as a policy which leads to low cumulative rewards, Bertsekas (2012).
5 Note that this is an arrangement counting problem with constraints.