Stochastic games generalize Markov decision processes (MDPs) to a multiagent setting by allowing the state transitions to depend jointly on all player actions, and by determining the rewards at each state through a multiplayer matrix game. We consider the problem of computing Nash equilibria in stochastic games, the analogue of planning in MDPs. We begin by providing a simple generalization of finite-horizon value iteration that computes a Nash strategy for each player in general-sum stochastic games. The algorithm takes an arbitrary Nash selection function as input, which allows the translation of local choices among multiple Nash equilibria into the selection of a single global Nash equilibrium.
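The finite-horizon scheme described above amounts to backward induction: at each stage and state, form the one-shot matrix game whose payoffs are immediate reward plus continuation value, and apply the Nash selection function to it. The sketch below is a minimal illustration under simplifying assumptions that are not from the paper: two players, and a toy selection function that picks the first pure-strategy equilibrium (assumed to exist); all function names here are hypothetical.

```python
def pure_nash_equilibria(R1, R2):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game.
    R1[i][j] and R2[i][j] are the payoffs to players 1 and 2."""
    n, m = len(R1), len(R1[0])
    return [(i, j)
            for i in range(n) for j in range(m)
            if all(R1[i][j] >= R1[k][j] for k in range(n))
            and all(R2[i][j] >= R2[i][l] for l in range(m))]

def first_pure_nash(R1, R2):
    """A toy Nash selection function: the lexicographically first
    pure equilibrium (assumes at least one exists)."""
    return pure_nash_equilibria(R1, R2)[0]

def finite_horizon_nash_vi(states, actions1, actions2, reward1, reward2,
                           trans, T, select=first_pure_nash):
    """Backward induction over horizon T. trans(s, a, b) returns a list
    of (next_state, probability) pairs; select resolves the stage game."""
    V1 = {s: 0.0 for s in states}
    V2 = {s: 0.0 for s in states}
    policy = []  # policy[t][s] = selected joint action, last stage first
    for _ in range(T):
        newV1, newV2, pi = {}, {}, {}
        for s in states:
            # Stage-game payoffs: immediate reward + expected continuation.
            Q1 = [[reward1(s, a, b) + sum(p * V1[s2] for s2, p in trans(s, a, b))
                   for b in actions2] for a in actions1]
            Q2 = [[reward2(s, a, b) + sum(p * V2[s2] for s2, p in trans(s, a, b))
                   for b in actions2] for a in actions1]
            i, j = select(Q1, Q2)
            pi[s] = (actions1[i], actions2[j])
            newV1[s], newV2[s] = Q1[i][j], Q2[i][j]
        V1, V2 = newV1, newV2
        policy.append(pi)
    return V1, V2, policy
```

Because the selection function resolves every stage game the same way for both players, the per-stage choices compose into one consistent equilibrium of the whole finite-horizon game.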
Our main technical result is an algorithm for computing near-Nash equilibria in large or infinite state spaces. This algorithm builds on our finite-horizon value iteration algorithm and adapts the sparse sampling methods of Kearns, Mansour, and Ng (1999) to stochastic games. We conclude by describing a counterexample showing that infinite-horizon discounted value iteration, which was shown by Shapley to converge in the zero-sum case (a result we extend slightly here), does not converge in the general-sum case.
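In the sparse-sampling style of Kearns, Mansour, and Ng, exact expectations over next states are replaced by averages over a small number of states drawn from a generative model, recursing to a fixed depth; the per-state cost is then independent of the size of the state space. The sketch below is an illustrative adaptation of that idea to a two-player stage game, again with a toy first-pure-equilibrium selection function; the names `sim`, `width`, and `select_first_pure_nash` are assumptions for this sketch, not the paper's notation.

```python
def select_first_pure_nash(Q1, Q2):
    """Toy Nash selection function: the first pure-strategy equilibrium
    of the bimatrix game (Q1, Q2); assumes one exists."""
    n, m = len(Q1), len(Q1[0])
    for i in range(n):
        for j in range(m):
            if all(Q1[k][j] <= Q1[i][j] for k in range(n)) and \
               all(Q2[i][l] <= Q2[i][j] for l in range(m)):
                return i, j
    raise ValueError("no pure-strategy equilibrium")

def sparse_sample_values(sim, s, actions1, actions2, depth, width,
                         select=select_first_pure_nash):
    """Estimate equilibrium values at state s. sim(s, a, b) is a
    generative model returning (next_state, reward1, reward2). For each
    joint action, average `width` sampled continuations, then resolve
    the resulting estimated stage game with the selection function."""
    if depth == 0:
        return 0.0, 0.0, None
    Q1 = [[0.0] * len(actions2) for _ in actions1]
    Q2 = [[0.0] * len(actions2) for _ in actions1]
    for i, a in enumerate(actions1):
        for j, b in enumerate(actions2):
            tot1 = tot2 = 0.0
            for _ in range(width):
                s2, r1, r2 = sim(s, a, b)
                v1, v2, _ = sparse_sample_values(sim, s2, actions1, actions2,
                                                depth - 1, width, select)
                tot1 += r1 + v1
                tot2 += r2 + v2
            Q1[i][j] = tot1 / width
            Q2[i][j] = tot2 / width
    i, j = select(Q1, Q2)
    return Q1[i][j], Q2[i][j], (actions1[i], actions2[j])
```

The recursion touches on the order of (|A1|·|A2|·width)^depth states regardless of how many states the game has, which is what makes the approach viable for large or infinite state spaces, at the price of only approximate (near-Nash) values.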
[1] L. Shapley. Stochastic Games. Proceedings of the National Academy of Sciences, 1953.
[2] Michael P. Wellman et al. Multiagent Reinforcement Learning: Theoretical Framework and an Algorithm. ICML, 1998.
[3] David A. McAllester et al. Approximate Planning for Factored POMDPs using Belief State Simplification. UAI, 1999.
[4] Craig Boutilier et al. Continuous Value Function Approximation for Sequential Bidding Policies. UAI, 1999.
[5] Ronen I. Brafman et al. A Near-Optimal Poly-Time Algorithm for Learning a Class of Stochastic Games. IJCAI, 1999.
[6] Ronen I. Brafman et al. A Near-Optimal Polynomial Time Algorithm for Learning in Certain Classes of Stochastic Games. Artif. Intell., 2000.
[7] Craig Boutilier et al. Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, 2000.