A Simulation-Based Policy Iteration Algorithm for Average Cost Unichain Markov Decision Processes

In this paper, we propose a simulation-based policy iteration algorithm for Markov decision process (MDP) problems with the average cost criterion under the unichain assumption, which is weaker than the assumptions made in previous work. In this algorithm, 1) the problem is converted to a stochastic shortest path problem, and the reference state can be chosen as any state that is recurrent under the current policy, so the reference state need not be the same from iteration to iteration; 2) the differential costs are evaluated indirectly by a temporal-difference learning scheme; 3) transient states are selected as the initial states for sample paths, and the inverse of the visit count is used as the stepsize to improve performance. Numerical results obtained by applying the algorithm to an inventory control problem are also provided.
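As a rough illustration of the ingredients described above, the following Python sketch combines a TD(0)-style evaluation of the differential costs (with the reference state's differential cost pinned to zero, mimicking the stochastic shortest path conversion) with a greedy one-step improvement. The function names, the tabular representation of P and c, the fixed trajectory length n_steps, and the running estimate of the average cost are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def td_policy_evaluation(P, c, policy, ref_state, n_steps=50_000, rng=None):
    """Estimate differential costs h(i) for a fixed policy by simulation.

    P has shape (n_states, n_actions, n_states); c has shape (n_states, n_actions).
    Transitions into ref_state are treated as terminating the associated
    stochastic shortest path problem, so h(ref_state) is held at zero.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    h = np.zeros(n)                  # differential cost estimates
    visits = np.zeros(n)             # per-state visit counts (stepsize = 1 / visits)
    avg_cost = 0.0                   # running estimate of the average cost
    state = rng.integers(n)          # start from an arbitrary (possibly transient) state
    for t in range(1, n_steps + 1):
        a = policy[state]
        nxt = rng.choice(n, p=P[state, a])
        cost = c[state, a]
        avg_cost += (cost - avg_cost) / t
        visits[state] += 1
        step = 1.0 / visits[state]   # inverse-visit-count stepsize
        # TD(0) update; the successor's cost-to-go is zero at the reference state.
        target = cost - avg_cost + (0.0 if nxt == ref_state else h[nxt])
        h[state] += step * (target - h[state])
        state = nxt
    return h, avg_cost

def greedy_improvement(P, c, h, avg_cost):
    """One-step lookahead improvement: Q(i,a) = c(i,a) - lambda + sum_j P(i,a,j) h(j)."""
    q = c - avg_cost + P @ h
    return q.argmin(axis=1)
```

In a full policy iteration loop, these two steps would alternate: evaluate the current policy with td_policy_evaluation, choosing ref_state as a state recurrent under that policy, then replace the policy with greedy_improvement, repeating until the policy stops changing.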