Empirical Game-Theoretic Analysis of Chaturanga

We analyze four-player chaturanga (an ancient variant of chess) using the methods of empirical game theory. Like chess, the game is computationally challenging because of its extremely large strategy space. From the perspective of game theory, it is more interesting than chess because it has more than two players. Removing the two-player restriction allows multiple equilibria and other complex strategic interactions that require the full tool set of game theory. The major challenge in applying game-theoretic methods to such a large game is identifying a tractable subset of the game for detailed analysis that still captures the essence of the strategic interactions. We argue that the notion of strategic independence holds significant promise for scaling game-theoretic analysis to large games. We present preliminary results based on data from two sets of strategies for chaturanga. These results suggest that strategic independence is present in chaturanga, and demonstrate some possible ways to exploit it.
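The abstract describes the empirical game-theoretic workflow only at a high level. As a rough, illustrative sketch of the kind of analysis involved, the code below estimates an empirical payoff function over a small restricted strategy set by repeated simulation, searches for pure-strategy profiles that are approximate Nash equilibria, and applies a crude test of strategic independence. The simulator, strategy names, sample counts, and tolerance are placeholder assumptions for illustration, not details taken from the paper.

```python
import itertools
import random
from statistics import mean


def simulate(profile, rng):
    """Placeholder for playing one game of four-player chaturanga among the
    strategies in `profile`; returns one noisy payoff per player."""
    base = {"aggressive": 1.0, "defensive": 0.5, "material": 0.8}
    return [base[s] + rng.gauss(0.0, 0.3) for s in profile]


def estimate_payoffs(strategies, n_players=4, samples=20, seed=0):
    """Estimate an empirical payoff function over all pure-strategy profiles."""
    rng = random.Random(seed)
    payoffs = {}
    for profile in itertools.product(strategies, repeat=n_players):
        runs = [simulate(profile, rng) for _ in range(samples)]
        payoffs[profile] = [mean(run[i] for run in runs) for i in range(n_players)]
    return payoffs


def pure_nash(payoffs, strategies, n_players=4, eps=0.1):
    """Return profiles where no single player gains more than eps by deviating."""
    equilibria = []
    for profile, utils in payoffs.items():
        stable = True
        for i in range(n_players):
            for dev in strategies:
                if dev == profile[i]:
                    continue
                alt = profile[:i] + (dev,) + profile[i + 1:]
                if payoffs[alt][i] > utils[i] + eps:
                    stable = False
                    break
            if not stable:
                break
        if stable:
            equilibria.append(profile)
    return equilibria


def independence_score(payoffs, strategies, player, other):
    """Average change in `player`'s estimated payoff when only `other`'s
    strategy is switched; values near zero hint at strategic independence."""
    diffs = []
    for profile, utils in payoffs.items():
        for dev in strategies:
            if dev == profile[other]:
                continue
            alt = profile[:other] + (dev,) + profile[other + 1:]
            diffs.append(abs(payoffs[alt][player] - utils[player]))
    return mean(diffs)


if __name__ == "__main__":
    strategies = ["aggressive", "defensive", "material"]  # illustrative names only
    payoffs = estimate_payoffs(strategies)
    print("approximate pure equilibria:", pure_nash(payoffs, strategies))
    print("player 0 sensitivity to player 1:", independence_score(payoffs, strategies, 0, 1))
```

In the actual setting, the simulation step would play out full games of four-player chaturanga between the chosen strategies; that cost is what motivates restricting attention to a small strategy set and exploiting any strategic independence the data reveals.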
