A STRATEGIC METAGAME PLAYER FOR GENERAL CHESS‐LIKE GAMES

This paper introduces METAGAMER, the first program designed within the paradigm of Metagame-playing (Metagame). The program plays games in the class of symmetric chess-like games, which includes chess, Chinese chess, checkers, draughts, and Shogi. METAGAMER takes as input the rules of a specific game and analyzes those rules to construct an efficient representation and an evaluation function for that game, which are then used by a generic search engine. The strategic analysis performed by METAGAMER relates a set of general knowledge sources to the details of the particular game. Among other properties, this analysis determines the relative value of the different pieces in a given game. Although METAGAMER does not learn from experience, the values resulting from its analysis are qualitatively similar to values used by experts on known games and are sufficient to produce competitive performance the first time METAGAMER plays a new game. Besides being the first Metagame-playing program, METAGAMER is also the first program to derive useful piece values directly from an analysis of the rules of different games. This paper describes the knowledge implemented in METAGAMER, illustrates the piece values METAGAMER derives for chess and checkers, and discusses experiments with METAGAMER on both existing and newly generated games.
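To make the architecture concrete, the following is a minimal sketch (not taken from the paper) of how analysis-derived piece values might feed a game-independent search: the evaluation function is built from a table of piece weights, and a generic fixed-depth negamax search consults only that function plus a move generator. The piece values, the `GameState` interface, and its placeholder methods are hypothetical stand-ins for whatever METAGAMER's strategic analysis and rule interpreter would produce for a particular game.

```python
# Illustrative sketch only: a generic search engine parameterized by an
# evaluation function derived from (hypothetical) analysis-produced piece values.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical piece values that a rule analysis might assign (illustrative only).
DERIVED_PIECE_VALUES: Dict[str, float] = {"pawn": 1.0, "knight": 3.1, "rook": 5.0}


@dataclass
class GameState:
    """Toy game state: material counts per side, from the mover's perspective."""
    my_pieces: Dict[str, int] = field(default_factory=dict)
    opp_pieces: Dict[str, int] = field(default_factory=dict)

    def legal_moves(self) -> List[str]:
        # Placeholder: a real Metagame player generates moves from the game rules.
        return []

    def apply(self, move: str) -> "GameState":
        # Placeholder: returns the resulting position from the opponent's viewpoint.
        return GameState(dict(self.opp_pieces), dict(self.my_pieces))


def material_eval(state: GameState, values: Dict[str, float]) -> float:
    """Evaluation: weighted material balance using the derived piece values."""
    mine = sum(values.get(p, 0.0) * n for p, n in state.my_pieces.items())
    theirs = sum(values.get(p, 0.0) * n for p, n in state.opp_pieces.items())
    return mine - theirs


def negamax(state: GameState, depth: int,
            evaluate: Callable[[GameState], float]) -> float:
    """Generic fixed-depth negamax; all game-specific knowledge lives in `evaluate`."""
    moves = state.legal_moves()
    if depth == 0 or not moves:
        return evaluate(state)
    return max(-negamax(state.apply(m), depth - 1, evaluate) for m in moves)


# Usage: plug the analysis-derived values into the generic search engine.
root = GameState({"pawn": 8, "rook": 2}, {"pawn": 8, "rook": 1})
score = negamax(root, depth=2, evaluate=lambda s: material_eval(s, DERIVED_PIECE_VALUES))
print(f"score for the side to move: {score:.1f}")
```

The point of the sketch is the separation of concerns the paper describes: the search routine never changes across games, while the evaluation function (here, just the piece-value table) is the product of analyzing a specific game's rules.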
