Mathematical programming and the control of Markov chains

Linear programming versions of several control problems on Markov chains are derived and studied under conditions that occur in typical problems arising from the discretization of continuous-time, continuous-state systems, or in discrete-state systems. Control interpretations of the dual variables and simplex multipliers are given. The formulation allows the treatment of ‘state space’-like constraints that cannot be handled conveniently with dynamic programming. The relation between dynamic programming on Markov chains and the deterministic discrete maximum principle is explored, and some insight is obtained into the problem of singular stochastic controls (with respect to a stochastic maximum principle).
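As a concrete illustration of the kind of formulation the abstract describes, the sketch below sets up the standard linear program for a discounted-cost Markov decision problem and solves it with `scipy.optimize.linprog`. The two-state, two-action chain, its rewards, and the discount factor are illustrative assumptions, not data from the paper; the LP structure itself (value variables, one inequality per state-action pair, dual variables interpretable as discounted state-action occupation frequencies) is the well-known formulation the abstract refers to.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action Markov chain (illustrative data only).
gamma = 0.9                                  # discount factor
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),     # P[a][s, s'], action 0
     np.array([[0.5, 0.5], [0.9, 0.1]])]     # action 1
r = np.array([[1.0, 0.5],                    # r[s, a], one-step rewards
              [0.0, 2.0]])
n_s, n_a = 2, 2

# Primal LP for the optimal value function v:
#   minimize   sum_s v(s)
#   subject to v(s) >= r(s, a) + gamma * sum_{s'} P(s'|s, a) v(s')  for all s, a.
# linprog wants A_ub @ x <= b_ub, so rewrite each constraint as
#   (gamma * P[a] - I) v <= -r[:, a].
A_ub = np.vstack([gamma * P[a] - np.eye(n_s) for a in range(n_a)])
b_ub = -np.concatenate([r[:, a] for a in range(n_a)])
c = np.ones(n_s)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n_s)
v = res.x

# Dual variables of the inequality constraints: with the sign convention of
# scipy's HiGHS backend they are nonpositive, so negate to obtain the
# discounted state-action occupation frequencies the abstract alludes to.
duals = -res.ineqlin.marginals

# Cross-check the LP value against plain value iteration.
v_vi = np.zeros(n_s)
for _ in range(2000):
    q = np.array([r[:, a] + gamma * P[a] @ v_vi for a in range(n_a)]).T
    v_vi = q.max(axis=1)
```

The complementary-slackness conditions of this LP single out the actions at which the dynamic-programming inequality is tight, which is one way the paper's control interpretation of the dual variables can be read: a positive dual weight on a constraint marks a state-action pair used by an optimal control.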