Policy iteration for perfect information stochastic mean payoff games with bounded first return times is strongly polynomial

Recent results of Ye and of Hansen, Miltersen, and Zwick show that policy iteration for one- or two-player (perfect information) zero-sum stochastic games, restricted to instances with a fixed discount rate, is strongly polynomial. We show that policy iteration for mean-payoff zero-sum stochastic games is also strongly polynomial when restricted to instances with a bounded first mean return time to a given state. The proof is based on methods of nonlinear Perron-Frobenius theory, which allow us to reduce the mean-payoff problem to a discounted problem with a state-dependent discount rate. Our analysis also shows that policy iteration remains strongly polynomial for discounted problems in which the discount rate is state dependent (and even negative at certain states), provided that the spectral radii of the nonnegative matrices associated with all strategies are bounded from above by a fixed constant strictly less than 1.
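To make the last condition concrete, the following is a minimal, illustrative sketch of policy iteration for the one-player (Markov decision) case with a state-dependent discount factor, written in Python with NumPy. It is not the authors' algorithm for the two-player mean-payoff game; the function name policy_iteration and the parameters P, r, gamma, and rho_bound are hypothetical, and the assertion merely checks the condition from the abstract, namely that the spectral radius of the nonnegative matrix associated with each evaluated strategy stays below a fixed constant strictly less than 1.

```python
# Illustrative sketch only (not the paper's algorithm): policy iteration for a
# finite Markov decision problem in which the discount factor may depend on
# the state. All names here (P, r, gamma, rho_bound) are hypothetical.
import numpy as np

def policy_iteration(P, r, gamma, rho_bound=0.99):
    """P[a]: (n, n) transition matrix of action a; r[a]: (n,) rewards of action a;
    gamma: (n,) state-dependent discount factors; rho_bound: fixed constant < 1."""
    n_actions, n = len(P), P[0].shape[0]
    policy = np.zeros(n, dtype=int)                 # start from an arbitrary policy
    while True:
        # Policy evaluation: solve (I - diag(gamma) P_pi) v = r_pi.
        P_pi = np.array([P[policy[i]][i] for i in range(n)])
        r_pi = np.array([r[policy[i]][i] for i in range(n)])
        M = np.diag(gamma) @ P_pi                   # nonnegative matrix of the strategy
        # The analysis requires the spectral radius of M to stay below rho_bound < 1.
        assert max(abs(np.linalg.eigvals(M))) <= rho_bound, "spectral radius too large"
        v = np.linalg.solve(np.eye(n) - M, r_pi)
        # Policy improvement: greedy one-step lookahead with state-dependent discount.
        q = np.array([r[a] + gamma * (P[a] @ v) for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):      # no improvement: optimal policy found
            return policy, v
        policy = new_policy
```

In the setting of the paper, such state-dependent discount factors arise when the mean-payoff problem is reduced to a discounted one via nonlinear Perron-Frobenius theory, and rho_bound plays the role of the fixed constant strictly less than 1 in the statement above; entries of gamma may even exceed 1 at some states (corresponding to negative discount rates) as long as that spectral-radius bound holds for every strategy.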

[1] L. S. Shapley. Stochastic Games. Proceedings of the National Academy of Sciences, 1953.

[2] R. A. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.

[3] R. Bellman. Dynamic Programming. Princeton University Press, 1957.

[4] A. J. Hoffman and R. M. Karp. On Nonterminating Stochastic Games. Management Science, 1966.

[5] E. V. Denardo. Contraction mappings in the theory underlying dynamic programming. SIAM Review, 1967.

[6] E. V. Denardo and B. L. Fox. Multichain Markov Renewal Programs, 1968.

[7] T. M. Liggett and S. A. Lippman. Stochastic Games with Perfect Information and Time Average Payoff, 1969.

[8] R. Nussbaum. Convexity and log convexity for the spectral radius, 1986.

[9] D. P. Bertsekas. Dynamic Programming: Deterministic and Stochastic Models, 1987.

[10] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994.

[11] J. Filar and K. Vrieze. Competitive Markov Decision Processes. Springer, 1996.

[12] J. Mallet-Paret and R. D. Nussbaum. Eigenvalues for a class of homogeneous cone maps arising from max-plus operators, 2002.

[13] J. Cochet-Terrasson and S. Gaubert. A policy iteration algorithm for zero-sum stochastic games with mean payoff, 2006.

[14] O. Friedmann. An Exponential Lower Bound for the Parity Game Strategy Improvement Algorithm as We Know it. In Proceedings of the 24th Annual IEEE Symposium on Logic in Computer Science (LICS), 2009.

[15] J. Fearnley. Exponential Lower Bounds for Policy Iteration. In Proceedings of ICALP, 2010.

[16] M. Akian, S. Gaubert, and R. Nussbaum. A Collatz-Wielandt characterization of the spectral radius of order-preserving homogeneous maps on cones. arXiv:1112.5968, 2011.

[17] Y. Ye. The Simplex and Policy-Iteration Methods Are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate. Mathematics of Operations Research, 2011.

[18] S. Gaubert et al. Policy iteration algorithm for zero-sum multichain stochastic games with mean payoff and perfect information. arXiv preprint, 2012.

[19] T. D. Hansen, P. B. Miltersen, and U. Zwick. Strategy Iteration Is Strongly Polynomial for 2-Player Turn-Based Stochastic Games with a Constant Discount Factor. Journal of the ACM, 2013.