Learning a Strategy with Neural Approximated Temporal-Difference Methods in English Draughts

With its large game-tree complexity and EXPTIME-complete decision problem, English Draughts, weakly solved in 2007 after almost two decades of computation, remains hard for intelligent computer agents to learn. In this paper we present a Temporal-Difference method whose value function is nonlinearly approximated by a four-layer multi-layer perceptron. We built multiple English Draughts-playing agents, each starting from a randomly initialized strategy, which use this method during self-play to improve their strategies. We show that the agents learn by comparing their winning rates across parameter settings. Our best agent wins against the computer draughts programs Neuro Draughts, KCheckers, and CheckerBoard with the easych engine, and loses to Chinook, GuiCheckers, and CheckerBoard with the strong Cake engine. Overall, our best agent has reached an amateur-league level.
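
To make the approach concrete, the following is a minimal sketch of a neural-approximated temporal-difference update in the spirit of the abstract: a small multi-layer perceptron estimates a position's value and is nudged toward a bootstrapped TD target after each move of self-play. This is not the authors' implementation; the board encoding width, layer sizes, learning rate, discount factor, and the choice of plain TD(0) are all illustrative assumptions.

```python
# Minimal sketch: TD(0) with a four-layer MLP value function (numpy only).
# Sizes, hyperparameters, and the state encoding are assumptions,
# not the configuration used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Four layers: 32-unit board encoding -> two hidden layers -> scalar value.
sizes = [32, 40, 10, 1]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Return the activations of every layer for input x."""
    acts = [x]
    for W, b in zip(weights, biases):
        acts.append(sigmoid(acts[-1] @ W + b))
    return acts

def td_update(state, target, lr=0.01):
    """One TD step: move V(state) toward the bootstrapped target
    by backpropagating the squared error through the sigmoid layers."""
    acts = forward(state)
    delta = (acts[-1] - target) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(weights))):
        grad_W = np.outer(acts[i], delta)
        new_delta = (delta @ weights[i].T) * acts[i] * (1 - acts[i])
        weights[i] -= lr * grad_W
        biases[i] -= lr * delta
        delta = new_delta

# Self-play skeleton: in the real agent, states would be encoded board
# positions from its own games; here they are random stand-ins.
prev_state = rng.random(32)
next_state = rng.random(32)
reward, gamma, terminal = 0.0, 0.99, False
v_next = 1.0 if terminal else forward(next_state)[-1].item()
td_update(prev_state, reward + gamma * v_next)
```

During self-play the agent would repeat this update along every game it plays against itself, with the terminal reward (win, loss, or draw) propagating backward through the bootstrapped targets.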
