Iterative Empirical Game Solving via Single Policy Best Response

Policy-Space Response Oracles (PSRO) is a general algorithmic framework for learning policies in multiagent systems by interleaving empirical game analysis with deep reinforcement learning (Deep RL). At each iteration, Deep RL is invoked to train a best response to a mixture of opponent policies. The repeated application of Deep RL imposes a heavy computational burden as the framework is applied to more complex domains. We introduce two variations of PSRO designed to reduce the amount of simulation required during Deep RL training. Both algorithms modify how PSRO adds new policies to the empirical game, basing each new policy on a learned response to a single opponent policy. The first, Mixed-Oracles, transfers knowledge from previous iterations of Deep RL, requiring training only against the opponent's newest policy. The second, Mixed-Opponents, constructs a single pure-strategy opponent by mixing the existing strategies' action-value estimates, instead of sampling among their policies. Learning against a single policy mitigates the variance in state outcomes induced by an unobserved distribution of opponents. We empirically demonstrate that both algorithms substantially reduce the amount of training simulation required by PSRO, while producing equivalent or better solutions to the game.
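As a rough illustration of the Mixed-Opponents idea, the sketch below combines the opponents' action-value estimates into a single Q-function, weighted by the empirical-game mixture, and acts greedily on it, producing one pure-strategy opponent for the learner to train against. This is a minimal sketch under assumptions: the function and variable names are hypothetical, and the exact combination rule may differ from the paper's implementation.

```python
import numpy as np

def mixed_opponent_action(q_functions, mixture_weights, observation):
    """Act greedily on a mixture-weighted combination of opponent Q-estimates.

    q_functions: list of callables, each mapping an observation to an array
        of per-action value estimates for one existing opponent policy.
    mixture_weights: array of probabilities over those policies (sums to 1).
    Returns a single (pure-strategy) action for the combined opponent.
    """
    # Stack per-policy action values: shape (n_policies, n_actions).
    q_values = np.stack([q(observation) for q in q_functions])
    # Weight each policy's values by its probability in the opponent mixture.
    mixed_q = mixture_weights @ q_values
    # The combined opponent plays greedily with respect to the mixed values.
    return int(np.argmax(mixed_q))

# Toy usage with two illustrative Q-functions over three actions.
q_a = lambda obs: np.array([1.0, 0.2, 0.0])
q_b = lambda obs: np.array([0.0, 0.5, 2.0])
action = mixed_opponent_action([q_a, q_b], np.array([0.7, 0.3]), observation=None)
```

Because the learner always faces this one deterministic opponent rather than a policy sampled from the mixture each episode, the variance in its training targets that stems from not observing which opponent it is facing is reduced.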
