Aligning Superhuman AI with Human Behavior: Chess as a Model System

As artificial intelligence becomes increasingly capable, in some cases achieving superhuman performance, there is growing potential for humans to learn from and collaborate with algorithms. However, AI systems often approach problems differently than people do, and can therefore be uninterpretable and hard to learn from. A crucial step in bridging this gap between human and artificial intelligence is modeling the granular actions that constitute human behavior, rather than simply matching aggregate human performance. We pursue this goal in a model system with a long history in artificial intelligence: chess. The aggregate performance of a chess player unfolds as they make decisions over the course of a game, and the hundreds of millions of games played online by players at every skill level form a rich source of data in which these decisions, and their exact context, are recorded in minute detail. Applying existing chess engines to this data, including an open-source implementation of AlphaZero, we find that they do not predict human moves well. We introduce Maia, a customized version of AlphaZero trained on human chess games, which predicts human moves with much higher accuracy than existing engines and can be tuned to achieve maximum accuracy when predicting the decisions of players at a specific skill level. For the dual task of predicting whether a human will make a large mistake on the next move, we develop a deep neural network that significantly outperforms competitive baselines. Taken together, our results suggest substantial promise in designing artificial intelligence systems with human collaboration in mind by first accurately modeling granular human decision-making.
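The central evaluation the abstract describes is move-matching accuracy: how often a model's predicted move coincides with the move the human actually played, broken down by the player's skill level. The sketch below is a hypothetical illustration of that bookkeeping, not the paper's actual pipeline; the record format, bucket size, and function name are all assumptions made for clarity.

```python
from collections import defaultdict

def move_match_accuracy(records, bucket_size=100):
    """Compute per-rating-bucket move-matching accuracy.

    `records` is an iterable of (rating, predicted_move, actual_move)
    tuples in UCI notation; this interface is hypothetical, chosen only
    to illustrate the evaluation described in the abstract.
    """
    hits = defaultdict(int)    # correct predictions per rating bucket
    totals = defaultdict(int)  # total predictions per rating bucket
    for rating, predicted, actual in records:
        bucket = (rating // bucket_size) * bucket_size
        totals[bucket] += 1
        if predicted == actual:
            hits[bucket] += 1
    return {b: hits[b] / totals[b] for b in totals}

# Toy data: a skill-targeted model might match weaker players less
# often than players near the rating it was trained to imitate.
sample = [
    (1150, "e2e4", "e2e4"),
    (1180, "g1f3", "d2d4"),
    (1520, "e2e4", "e2e4"),
    (1560, "f1c4", "f1c4"),
]
print(move_match_accuracy(sample))  # → {1100: 0.5, 1500: 1.0}
```

Reporting accuracy per rating bucket, rather than as a single aggregate number, is what makes the "tuned to a specific skill level" claim measurable: a model targeted at one skill level should peak in that bucket.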
