The Computation of Average Optimal Policies in Denumerable State Markov Decision Chains

This paper studies the expected average cost control problem for discrete-time Markov decision processes with denumerably infinite state spaces. A sequence of finite state space truncations is defined such that the average costs and average optimal policies in the sequence converge to the optimal average cost and an optimal policy in the original process. The theory is illustrated with several examples from the control of discrete-time queueing systems. Numerical results are discussed.
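The truncation-and-solve scheme described above can be illustrated with a minimal sketch. The example below builds an N-state truncation of a simple queue-control MDP (Bernoulli arrivals, a choice of two service rates, linear holding cost) and solves each truncation for its optimal average cost by relative value iteration. All parameters here are illustrative assumptions, and folding the overflow probability into the last state (last-column augmentation) is one standard way to close the truncated chain; the paper's own truncation construction and convergence conditions may differ.

```python
import numpy as np

def truncated_mdp(N, lam=0.3, rates=(0.4, 0.7), hold=1.0, serv=(0.0, 2.0)):
    """Build an N-state augmented truncation of a hypothetical queue-control MDP.

    States 0..N-1 are queue lengths; action a uses service rate rates[a]
    at per-step cost serv[a], plus holding cost hold * (queue length).
    Probability mass that would leave the truncation is folded back
    into state N-1 (last-column augmentation)."""
    A = len(rates)
    P = np.zeros((A, N, N))   # P[a, s, s'] = transition probability
    c = np.zeros((A, N))      # c[a, s] = one-step cost
    for a, mu in enumerate(rates):
        for s in range(N):
            c[a, s] = hold * s + serv[a]
            up = lam * (1 - mu)                     # arrival, no departure
            down = mu * (1 - lam) if s > 0 else 0.0  # departure, no arrival
            P[a, s, min(s + 1, N - 1)] += up        # augmented at the boundary
            if s > 0:
                P[a, s, s - 1] += down
            P[a, s, s] += 1 - up - down             # self-loop keeps chain aperiodic
    return P, c

def relative_value_iteration(P, c, tol=1e-9, max_iter=100_000):
    """Optimal average cost and a stationary policy for a finite MDP."""
    A, N, _ = P.shape
    h = np.zeros(N)
    for _ in range(max_iter):
        Q = c + P @ h                # Q[a, s]; (A,N,N) @ (N,) broadcasts to (A,N)
        Th = Q.min(axis=0)
        g = Th[0]                    # average cost estimate, reference state 0
        new_h = Th - Th[0]           # relative values, pinned at state 0
        if np.max(np.abs(new_h - h)) < tol:
            break
        h = new_h
    return g, Q.argmin(axis=0)

# As the truncation level N grows, the optimal average cost of the
# finite problem converges:
for N in (20, 40, 80):
    P, c = truncated_mdp(N)
    g, _ = relative_value_iteration(P, c)
    print(N, g)
```

Because the queue's tail probabilities decay geometrically under the stabilizing action, the average costs of successive truncations agree to many digits once N is moderately large, mirroring the convergence the paper establishes.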
