Evolution of Reactive Rules in Multi Player Computer Games Based on Imitation

Observing purely reactive situations in modern computer games, one can see that in many cases a few simple rules are sufficient to perform well in the game. Nevertheless, programming an artificial opponent, as it is done for most games today, remains a hard and time-consuming task. In this paper we propose a system in which no direct programming of the opponents' behaviour is necessary. Instead, rules are obtained by observing human players and are then evaluated and optimised by an evolutionary algorithm. We show that only little learning effort is required to be competitive in reactive situations. In our experiments, the system generated artificial players that outperformed the original ones supplied with the game.
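To make the described approach concrete, the following is a minimal sketch of imitation-seeded rule evolution. The rule representation (condition-action pairs over a feature vector), the dummy fitness function, and all parameter values are assumptions for illustration only; the paper's actual encoding, evaluation in the game, and evolutionary operators may differ.

```python
import random
from dataclasses import dataclass

# Hypothetical reactive rule: if the current game situation matches
# `condition` closely enough, the bot executes `action`. This exact
# representation is an assumption, not taken from the paper.
@dataclass
class Rule:
    condition: list  # feature vector describing a game situation
    action: int      # discrete action id (e.g. move, turn, shoot)

def rules_from_demonstrations(demos):
    """Seed the initial rule base from recorded human (state, action) pairs."""
    return [Rule(list(state), action) for state, action in demos]

def mutate(rules, sigma=0.1):
    """Gaussian perturbation of rule conditions, occasional action flips."""
    child = []
    for r in rules:
        cond = [x + random.gauss(0.0, sigma) for x in r.condition]
        act = r.action if random.random() > 0.05 else random.randrange(4)
        child.append(Rule(cond, act))
    return child

def fitness(rules):
    """Placeholder objective. In the paper's setting the rule base would
    control a bot and be scored by its in-game performance."""
    return -sum(abs(x) for r in rules for x in r.condition)

def evolve(demos, mu=5, lam=20, generations=50):
    """(mu + lambda)-style evolution of rule bases seeded by imitation."""
    parent = rules_from_demonstrations(demos)
    population = [mutate(parent) for _ in range(mu)]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(lam)]
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:mu]
    return population[0]

if __name__ == "__main__":
    # Toy demonstrations: 3-dimensional situation features, 4 possible actions.
    demos = [([random.random() for _ in range(3)], random.randrange(4))
             for _ in range(10)]
    best = evolve(demos)
    print(f"best rule base: {len(best)} rules, fitness {fitness(best):.3f}")
```

In this sketch the human demonstrations only initialise the population; the evolutionary loop then refines the rules against whatever fitness signal the game provides, which mirrors the observe-then-optimise structure described in the abstract.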
