Co-evolving checkers-playing programs using only win, lose, or draw

This paper details efforts made to evolve neural networks for playing checkers. In particular, multilayer perceptrons were used as evaluation functions to compare the relative worth of alternative board positions. The weights of these neural networks were evolved in a co-evolutionary manner, with networks competing only against other extant networks in the population. No external 'expert system' was used for comparison or evaluation. Feedback to the networks was limited to an overall point score based on the outcome of 10 games at each generation. No attempt was made to assign credit to individual moves or to prescribe useful features beyond the possible inclusion of piece differential. When played in 100 games against rated human opponents, the best evolved network earned a rating of 1750, placing it as a Class B player. This level of performance is competitive with many humans.