Evolution of Human-Competitive Agents in Modern Computer Games

Modern computer games have become far more sophisticated than their predecessors, and the demands on the intelligence of artificial game characters have grown accordingly. This paper describes an approach to evolving human-competitive artificial players for modern computer games. The agents are evolved from scratch and successfully learn to survive and defend themselves in the game. We present agents trained with standard evolution against a fixed training partner as well as agents trained by coevolution. Both types of agents were able to defeat, or even dominate, the original agents supplied with the game. Furthermore, we provide a detailed analysis of the results to gain more insight into the evolved agents.
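
To make the distinction between the two training setups concrete, the following is a minimal sketch, not the paper's implementation: it assumes a simple parameter-vector genome, a (mu + lambda) evolution strategy, and a hypothetical simulate_match() function standing in for the game's fitness evaluation. The standard setup evaluates offspring against a fixed training partner; the coevolutionary setup evaluates each population against samples drawn from an opposing, simultaneously evolving population.

```python
# Hedged sketch of the two training setups described in the abstract.
# All names (simulate_match, genome layout, parameter values) are illustrative
# assumptions, not details taken from the paper.

import random

GENOME_LEN = 16          # assumed size of an agent's parameter vector
MU, LAMBDA = 5, 20       # parents / offspring per generation
SIGMA = 0.1              # mutation step size
GENERATIONS = 50


def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]


def mutate(genome):
    # Gaussian perturbation of every parameter, as in a basic evolution strategy.
    return [g + random.gauss(0.0, SIGMA) for g in genome]


def simulate_match(agent, opponent):
    """Placeholder fitness: score of `agent` playing against `opponent`.
    A real implementation would run the game simulation."""
    return -sum((a - o) ** 2 for a, o in zip(agent, opponent))


def evolve_vs_fixed_partner(partner):
    """Standard (mu + lambda)-ES: fitness comes from matches against one fixed partner."""
    population = [random_genome() for _ in range(MU)]
    for _ in range(GENERATIONS):
        offspring = [mutate(random.choice(population)) for _ in range(LAMBDA)]
        pool = population + offspring
        pool.sort(key=lambda g: simulate_match(g, partner), reverse=True)
        population = pool[:MU]
    return population[0]


def coevolve():
    """Competitive coevolution: two populations evolve against each other; each
    individual's fitness is its total score against a sample of the other side."""
    pop_a = [random_genome() for _ in range(MU)]
    pop_b = [random_genome() for _ in range(MU)]
    for _ in range(GENERATIONS):
        for own, other in ((pop_a, pop_b), (pop_b, pop_a)):
            offspring = [mutate(random.choice(own)) for _ in range(LAMBDA)]
            pool = own + offspring
            opponents = random.sample(other, k=min(3, len(other)))
            pool.sort(key=lambda g: sum(simulate_match(g, o) for o in opponents),
                      reverse=True)
            own[:] = pool[:MU]   # in-place survivor selection
    return pop_a[0], pop_b[0]


if __name__ == "__main__":
    fixed_partner = random_genome()
    best_standard = evolve_vs_fixed_partner(fixed_partner)
    best_a, best_b = coevolve()
```

The sketch only illustrates the structural difference between the two regimes: in the fixed-partner case the fitness landscape is static, whereas under coevolution it shifts as the opposing population improves, which is the usual motivation for an arms-race style of training.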
