Achieving Target State-Action Frequencies in Multichain Average-Reward Markov Decision Processes

In this paper we address a basic problem that arises naturally in average-reward Markov decision processes with constraints and/or nonstandard payoff criteria: given a feasible state-action frequency vector (“the target”), construct a policy whose state-action frequencies match those of the target vector. While it is well known that a solution to this problem cannot, in general, be found in the space of stationary randomized policies, we construct a solution with an “ultimately stationary” structure: it consists of two stationary policies, where the first is used initially and a switch to the second is made at a certain random switching time. The computational effort required to construct this solution is minimal. We also show that the problem can always be solved by a stationary policy if the original MDP is “extended” by adding certain states and actions. The solution in the original MDP is then obtained by mapping the solution in the extended MDP back to the original process.
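To make the central object concrete: under a stationary randomized policy, the long-run state-action frequency of pair (s, a) is x(s, a) = μ(s)·π(a | s), where μ is the stationary distribution of the Markov chain the policy induces. The following is a minimal sketch of this computation for a small unichain MDP; the transition probabilities and policy here are hypothetical numbers chosen for illustration, not from the paper.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative numbers only).
# P[s, a, s'] = transition probability; pi[s, a] = pi(a | s).
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0 under actions 0, 1
    [[0.5, 0.5], [0.1, 0.9]],   # transitions from state 1 under actions 0, 1
])
pi = np.array([
    [0.7, 0.3],
    [0.4, 0.6],
])

def state_action_frequencies(P, pi):
    """Long-run state-action frequencies x(s, a) of a stationary policy.

    x(s, a) = mu(s) * pi(a | s), where mu is the stationary distribution
    of the chain induced by pi (assumed unichain for this sketch; in the
    multichain case treated in the paper, mu depends on the initial state).
    """
    n_states = P.shape[0]
    # Induced transition matrix: P_pi[s, s'] = sum_a pi[s, a] * P[s, a, s'].
    P_pi = np.einsum('sa,sat->st', pi, P)
    # Solve mu P_pi = mu together with sum(mu) = 1 (least squares).
    A = np.vstack([P_pi.T - np.eye(n_states), np.ones(n_states)])
    b = np.append(np.zeros(n_states), 1.0)
    mu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu[:, None] * pi   # x[s, a] = mu[s] * pi[s, a]

x = state_action_frequencies(P, pi)
```

Any such x is nonnegative, sums to one, and satisfies the flow-balance constraints of the standard linear-programming formulation; the paper's question is the converse direction: which feasible x can be realized, and by what kind of policy, when stationary policies alone do not suffice.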
