Learning Time Allocation Using Neural Networks

The strength of a game-playing program depends mainly on the adequacy of its evaluation function and the efficacy of its search algorithm. This paper investigates how temporal-difference (TD) learning and genetic algorithms can be used to improve various decisions made during game-tree search. Existing TD algorithms are not directly suitable for learning search decisions, so we propose a modified update rule that uses the TD error of the evaluation function to shorten the lag between two rewards. Genetic algorithms, by contrast, can be applied directly to learning search decisions. For our experiments we selected time allocation from the set of search decisions: on each move the player chooses a search depth, constrained by the amount of time left. As a test domain we used the game of Lines of Action, which has roughly the same complexity as Othello. From the results we conclude that both the TD and the genetic approach perform well compared to existing time-allocation techniques. Finally, we briefly discuss the issues that can arise when the algorithms are applied to more complex search decisions.
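The abstract outlines the idea of reusing the TD error of the evaluation function as a training signal for a time-allocation policy. The sketch below illustrates the ingredients under several assumptions: a linear evaluation function, a conventional TD(0) error, and a hypothetical softmax depth-selection policy (`choose_depth`, `theta`). It is an illustration of the general setup, not the paper's actual modified update rule.

```python
import numpy as np

rng = np.random.default_rng(0)


def evaluate(phi, w):
    # Linear evaluation function V(s) = w . phi(s); the feature vector phi and
    # the weights w are illustrative placeholders, not the paper's evaluator.
    return float(np.dot(w, phi))


def td0_error(w, phi_t, phi_next, reward, gamma=1.0):
    # Conventional TD(0) error between two successive positions. The paper
    # reuses the TD error of the evaluation function as a denser signal for
    # learning search decisions; only the standard error is shown here.
    return reward + gamma * evaluate(phi_next, w) - evaluate(phi_t, w)


def update_evaluation(w, phi_t, delta, alpha=0.01):
    # Gradient-style TD update of the evaluation weights.
    return w + alpha * delta * phi_t


def choose_depth(phi, theta, time_left, max_depth=8):
    # Hypothetical softmax time-allocation policy: pick a search depth from the
    # position features and the remaining time; theta has one row per depth.
    prefs = theta @ np.append(phi, time_left)
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    return int(rng.choice(max_depth, p=probs)) + 1


# Toy usage with random features for two successive positions.
n_features, max_depth = 4, 8
w = np.zeros(n_features)
theta = rng.normal(size=(max_depth, n_features + 1))
phi_t, phi_next = rng.normal(size=n_features), rng.normal(size=n_features)

depth = choose_depth(phi_t, theta, time_left=0.5, max_depth=max_depth)
delta = td0_error(w, phi_t, phi_next, reward=0.0)
w = update_evaluation(w, phi_t, delta)
```

With appropriate credit assignment, the same TD error could also drive updates to the depth-selection parameters `theta`; bridging that gap is where a modified update rule of the kind described above would come in.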
