Learning to Evaluate Chess Positions with Deep Neural Networks and Limited Lookahead

In this paper we propose a novel supervised learning approach for training Artificial Neural Networks (ANNs) to evaluate chess positions. The method we present aims to train different ANN architectures to understand chess positions similarly to how highly rated human players do. We investigate the pattern-recognition capabilities of ANNs, an ability that distinguishes chess grandmasters from amateur players. We collect around 3,000,000 different chess positions played by highly skilled chess players and label them with the evaluation function of Stockfish, one of the strongest existing chess engines. We create four different datasets from scratch that are used for different classification and regression experiments. The results show that relatively simple Multilayer Perceptrons (MLPs) outperform Convolutional Neural Networks (CNNs) in all the experiments we performed. We also investigate two different board representations: the first encodes whether or not a piece is present on each square, while the second assigns each piece a numerical value according to its strength. Our results show that the latter input representation negatively affects the performance of the ANNs in almost all experiments.
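
To make the two board representations concrete, the sketch below shows one plausible way to implement them in Python, assuming the python-chess library. The plane ordering, the use of 12 binary planes, and the specific strength values (pawn = 1 through king = 10) are illustrative assumptions; the abstract does not fix these details.

```python
# A minimal sketch of the two input encodings, assuming python-chess and NumPy.
import chess
import numpy as np

# Hypothetical strength values; the paper's exact choices are not given here.
PIECE_VALUES = {
    chess.PAWN: 1.0, chess.KNIGHT: 3.0, chess.BISHOP: 3.0,
    chess.ROOK: 5.0, chess.QUEEN: 9.0, chess.KING: 10.0,
}

def encode_presence(board: chess.Board) -> np.ndarray:
    """Binary encoding: one 8x8 plane per (piece type, colour) pair,
    holding a 1 wherever that piece is present on the board."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for square, piece in board.piece_map().items():
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        planes[plane, chess.square_rank(square), chess.square_file(square)] = 1.0
    return planes  # flatten to a 768-dimensional vector for an MLP

def encode_values(board: chess.Board) -> np.ndarray:
    """Value encoding: a single 8x8 grid holding each piece's strength,
    signed by colour (positive for White, negative for Black)."""
    grid = np.zeros((8, 8), dtype=np.float32)
    for square, piece in board.piece_map().items():
        sign = 1.0 if piece.color == chess.WHITE else -1.0
        grid[chess.square_rank(square), chess.square_file(square)] = (
            sign * PIECE_VALUES[piece.piece_type]
        )
    return grid  # flatten to a 64-dimensional vector for an MLP

if __name__ == "__main__":
    start = chess.Board()
    print(encode_presence(start).shape)  # (12, 8, 8)
    print(encode_values(start).shape)    # (8, 8)
```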
