Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player

Abstract

Since the early days of artificial intelligence, there has been interest in having a computer teach itself how to play a game of skill, like checkers, at a level that is competitive with human experts. To be truly noteworthy, such efforts should minimize the amount of human intervention in the learning process. Recently, co-evolution has been used to evolve a neural network (called Anaconda) that, when coupled with a minimax search, can evaluate checkerboards and play to the level of a human expert, as indicated by its rating of 2045 on an international web site for playing checkers. The neural network uses only the location, type, and number of pieces on the board as input. No other features that would require human expertise are included. Experiments were conducted to verify the neural network's expert rating by playing it in 10 games against a "novice-level" version of Chinook, a world-champion checkers program. The neural network had 2 wins, 4 losses, and 4 draws in the 10-game match. Based on an estimated rating of Chinook at the novice level, the results corroborate Anaconda's expert rating.
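To make the setup described above concrete, the following Python sketch illustrates the general idea of a piece-only board encoding evaluated by a feedforward network at the leaves of an alpha-beta minimax search. It is not the authors' implementation: the layer sizes, randomly initialized weights, king weighting, and the helper functions legal_moves and apply_move are hypothetical placeholders for illustration only.

```python
import numpy as np

def encode_board(board):
    """Board is a length-32 vector over the playable squares: +1 for the
    learner's men, +K for its kings, -1/-K for the opponent, 0 for empty.
    This mirrors the piece-only input (location, type, count) named in the
    abstract; the king weight K is an assumed convention here."""
    return np.asarray(board, dtype=float)

class NeuralEvaluator:
    """Feedforward board evaluator; sizes and random weights are stand-ins
    for the co-evolved network's architecture and parameters."""
    def __init__(self, n_in=32, n_hidden=40, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, 1))

    def __call__(self, board):
        h = np.tanh(encode_board(board) @ self.w1)
        return float(np.tanh(h @ self.w2))  # scalar score in (-1, 1)

def alphabeta(board, depth, maximizing, evaluate, legal_moves, apply_move,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning; the neural network scores the leaves.
    legal_moves(board, maximizing) and apply_move(board, move) are assumed,
    caller-supplied checkers rules helpers."""
    moves = legal_moves(board, maximizing)
    if depth == 0 or not moves:
        return evaluate(board)
    best = float("-inf") if maximizing else float("inf")
    for move in moves:
        score = alphabeta(apply_move(board, move), depth - 1, not maximizing,
                          evaluate, legal_moves, apply_move, alpha, beta)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if beta <= alpha:
            break  # prune: the opponent will avoid this continuation
    return best
```

In this arrangement the search supplies lookahead while the evolved network supplies the positional judgment, which is the division of labor the abstract attributes to Anaconda.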