Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess

It is non-trivial to design engaging and balanced sets of game rules. Modern chess has evolved over centuries, but without a similar recourse to history, the consequences of rule changes to game dynamics are difficult to predict. AlphaZero provides an alternative in silico means of game balance assessment. It is a system that can learn near-optimal strategies for any rule set from scratch, without any human supervision, by continually learning from its own experience. In this study we use AlphaZero to creatively explore and design new chess variants. There is growing interest in chess variants like Fischer Random Chess, because of classical chess's voluminous opening theory, the high percentage of draws in professional play, and the non-negligible number of games that end while both players are still in their home preparation. We compare nine other variants that involve atomic changes to the rules of chess. The changes allow for novel strategic and tactical patterns to emerge, while keeping the games close to the original. By learning near-optimal strategies for each variant with AlphaZero, we determine what games between strong human players might look like if these variants were adopted. Qualitatively, several variants are very dynamic. An analytic comparison shows that pieces are valued differently between variants, and that some variants are more decisive than classical chess. Our findings demonstrate the rich possibilities that lie beyond the rules of modern chess.
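As a rough illustration of the kind of analytic comparison described above, the sketch below estimates relative piece values for a variant by regressing material imbalance against game outcome over a corpus of self-play games, and computes a simple decisiveness proxy (the non-draw rate). This is a minimal, hypothetical sketch and not the paper's actual methodology; the data format (per-piece imbalance vectors paired with outcomes) and the helper names estimate_piece_values and decisive_rate are assumptions introduced here for illustration.

```python
# Hypothetical sketch: estimate relative piece values for a chess variant
# from self-play game records, and compute a simple decisiveness measure.
# The input format is assumed for this example and is not taken from the paper.

import numpy as np

PIECES = ["pawn", "knight", "bishop", "rook", "queen"]

def estimate_piece_values(games):
    """games: list of (imbalance, outcome) pairs, where `imbalance` is a
    length-5 vector of (White count - Black count) per piece type averaged
    over the game, and `outcome` is +1 / 0 / -1 from White's point of view.
    Returns piece values normalised so that a pawn equals 1."""
    X = np.array([imb for imb, _ in games], dtype=float)
    y = np.array([out for _, out in games], dtype=float)
    # Least-squares fit: outcome ~ X @ w, so w_i reflects how much an extra
    # piece of type i shifts the expected result in this variant.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(PIECES, w / w[0]))  # express values in pawn units

def decisive_rate(outcomes):
    """Fraction of games that did not end in a draw (outcome != 0),
    a crude proxy for how decisive a variant is."""
    outcomes = np.asarray(outcomes)
    return float(np.mean(outcomes != 0))
```

Comparing the fitted value dictionaries and non-draw rates across variants would then give a first-order view of how a rule change shifts piece valuations and decisiveness relative to classical chess.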
