Abstract
Linear programming versions of some control problems on Markov chains are derived and studied under conditions typical of problems that arise by discretizing continuous-time, continuous-state systems, or that arise in discrete-state systems. Control interpretations of the dual variables and simplex multipliers are given. The formulation allows the treatment of ‘state space’-like constraints which cannot be handled conveniently with dynamic programming. The relation between dynamic programming on Markov chains and the deterministic discrete maximum principle is explored, and some insight is obtained into the problem of singular stochastic controls (with respect to a stochastic maximum principle).
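The kind of linear program referred to above can be sketched on a toy chain. The sketch below is a minimal illustration and not the paper's formulation: it enumerates the deterministic policies of a hypothetical two-state, two-action chain and evaluates each by solving (I − βP_π)V = c_π; by linear programming duality for Markov decision problems, the smallest weighted value α·V_π equals the optimum of the primal program min Σ c(i,a)x(i,a) subject to the flow-balance constraints. All numbers (costs, transitions, discount factor) are invented for illustration.

```python
from itertools import product

beta = 0.9                      # discount factor (illustrative)
alpha = [0.5, 0.5]              # initial-state weights in the LP objective

# Hypothetical two-state chain: action 0 = stay put, action 1 = switch states.
# P[i][a][j] = transition probability, c[i][a] = one-step cost (all invented).
P = [[[1.0, 0.0], [0.0, 1.0]],
     [[0.0, 1.0], [1.0, 0.0]]]
c = [[2.0, 1.0], [0.0, 5.0]]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def evaluate(policy):
    """Policy evaluation: solve (I - beta * P_pi) V = c_pi."""
    A = [[(1.0 if i == j else 0.0) - beta * P[i][policy[i]][j]
          for j in range(2)] for i in range(2)]
    b = [c[i][policy[i]] for i in range(2)]
    return solve2(A, b)

# The LP optimum over randomized policies is attained at a deterministic
# policy, so exhaustive enumeration recovers it on this tiny example.
best_policy, best_value = min(
    ((pi, sum(a * v for a, v in zip(alpha, evaluate(pi))))
     for pi in product(range(2), repeat=2)),
    key=lambda t: t[1])

print(best_policy, best_value)
```

On these invented numbers the cheapest behaviour is to switch out of state 0 and then stay in the zero-cost state 1, which the enumeration recovers as the minimizing policy.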
Notes
†Communicated by Dr. A. T. Fuller. H. J. K. was supported in part by the National Science Foundation under Grant No. GK-2788, in part by the National Aeronautics and Space Administration under Grant No. NGL-40-002-015, and in part by the Air Force Office of Scientific Research under Grant No. AF-AFOSR 67-0693A. A. J. K. was supported by the National Science Foundation under Grant No. GK-2788.