Polygames: Improved Zero Learning

Since DeepMind's AlphaZero, Zero learning has quickly become the state-of-the-art method for many board games. It can be improved by using a fully convolutional structure (no fully connected layer). With such an architecture plus global pooling, we can create bots independent of the board size. Training can be made more robust by keeping track of the best checkpoints seen so far and training against them. Building on these features, we release Polygames, our framework for Zero learning, together with its library of games and its checkpoints. We defeated strong human players at the game of Hex on a 19x19 board, which was often said to be intractable for Zero learning, and at Havannah. We also won several first places at the TAAI competitions.
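As a minimal sketch of the board-size-independent architecture (not the actual Polygames code; all layer widths are illustrative assumptions), the trunk and policy head below use only convolutions, so they accept any board size, while global average pooling reduces the feature map to a fixed-size vector before the value estimate:

```python
# Illustrative sketch in PyTorch of a fully convolutional policy-value net
# with global pooling; not the Polygames implementation, and all channel
# counts are assumptions.
import torch
import torch.nn as nn

class FullyConvZeroNet(nn.Module):
    def __init__(self, in_channels: int = 3, width: int = 64):
        super().__init__()
        # Trunk: convolutions only, so no spatial dimension is ever baked in.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        # Policy head: a 1x1 convolution yields one logit per board cell,
        # whatever the board size.
        self.policy_head = nn.Conv2d(width, 1, 1)
        # Value head: global average pooling collapses the spatial dimensions,
        # so the only dense layer acts on a fixed-size pooled vector.
        self.value_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, 1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        policy_logits = self.policy_head(h).flatten(1)  # (batch, H*W)
        value = self.value_head(h)                      # (batch, 1)
        return policy_logits, value

# The same weights run on 9x9 and 19x19 boards without modification:
net = FullyConvZeroNet()
for size in (9, 19):
    logits, value = net(torch.zeros(1, 3, size, size))
    print(size, logits.shape, value.shape)
```

The checkpoint-tracking idea can be sketched just as briefly. The pool below keeps the strongest checkpoints seen so far and samples self-play opponents from them instead of always using the latest model; the scoring and pool size are assumptions for illustration:

```python
import random

# Illustrative sketch of a best-checkpoint pool; scores are assumed to come
# from evaluation games against the current model.
POOL_SIZE = 10
best_checkpoints = []  # list of (score, checkpoint) pairs

def maybe_add(checkpoint, score):
    """Insert a checkpoint, keeping only the top POOL_SIZE by score."""
    best_checkpoints.append((score, checkpoint))
    best_checkpoints.sort(key=lambda t: t[0], reverse=True)
    del best_checkpoints[POOL_SIZE:]

def sample_opponent():
    """Pick a strong past checkpoint to train against, which makes
    training more robust than playing only the latest model."""
    return random.choice(best_checkpoints)[1]
```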
