Encyclopedia of Computer Graphics and Games

Computer game worlds are often inhabited by numerous artificial agents, which may be helpful, neutral, or hostile toward the player or players. Common approaches for defining the behavior of such agents include rule-based scripts and finite state machines (Buckland, 2005). However, agent behavior can also be generated automatically using evolutionary computation (EC; Eiben and Smith, 2003). EC is a machine-learning technique that can be applied to sequential decision-making problems with large and partially observable state spaces, such as video games. EC can create individual agents or teams, and these agents can serve as opponents or companions of human players. Agents can also be evolved to play games as a human would, in order to test the efficacy of EC techniques. EC can even create game artifacts other than agents, such as weapons. EC is flexible largely because it requires little domain knowledge compared to traditional approaches. It is also capable of discovering surprising and effective behavior that a human expert would not think to program. Applied intelligently, the approach can even adapt to human players in a manner that keeps providing interesting and novel experiences. This article focuses mostly on discovering effective opponent behavior (since that is the focus of most research), although examples of other applications are given where appropriate.
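
To make the basic idea concrete, the following minimal sketch shows one simple form of evolutionary computation applied to agent behavior: a genome is a fixed-length vector of controller parameters (for example, the weights of a small neural network), fitness is measured by letting the agent play the game, and new genomes are produced by mutating the best performers. The function evaluate_in_game is a hypothetical placeholder standing in for a full game simulation, and the plain mutation-and-truncation-selection loop shown here is only an illustrative assumption; the works cited below typically use more sophisticated algorithms such as NEAT.

import random

POPULATION_SIZE = 50
GENOME_LENGTH = 20      # e.g., weights of a simple neural controller
MUTATION_STDDEV = 0.1
GENERATIONS = 100

def evaluate_in_game(genome):
    # Hypothetical placeholder: a real system would run one game episode
    # with an agent whose behavior is parameterized by the genome and
    # return a score such as survival time or points earned.
    return -sum(w * w for w in genome)  # toy fitness for illustration only

def mutate(genome):
    # Gaussian perturbation of every parameter.
    return [w + random.gauss(0.0, MUTATION_STDDEV) for w in genome]

# Start from a random population of genomes.
population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Evaluate every individual by letting it play the game.
    ranked = sorted(population, key=evaluate_in_game, reverse=True)
    # Truncation selection: the best half survives and reproduces.
    parents = ranked[:POPULATION_SIZE // 2]
    offspring = [mutate(random.choice(parents))
                 for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + offspring

best = max(population, key=evaluate_in_game)

Because the only game-specific component is the fitness evaluation, the same loop can be reused across very different games, which is one reason EC requires comparatively little domain knowledge.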

[1] A. E. Eiben et al. Introduction to Evolutionary Computing, 2003, Natural Computing Series.

[2] Risto Miikkulainen et al. HyperNEAT-GGP: A HyperNEAT-Based Atari General Game Player, 2012, GECCO '12.

[3] Rudolf Kadlec et al. Pogamut 3 Can Assist Developers in Building AI (Not Only) for Their Videogame Agents, 2009, AGS.

[4] Mat Buckland. Programming Game AI by Example, 2004.

[5] Rudolf Kadlec. Evolution of Intelligent Agent Behaviour in Computer Games, 2008.

[6] Jean-Baptiste Mouret et al. Evolving Neural Networks That Are Both Modular and Regular: HyperNEAT Plus the Connection Cost Technique, 2014, GECCO.

[7] Kenneth O. Stanley et al. Automatic Content Generation in the Galactic Arms Race Video Game, 2009, IEEE Transactions on Computational Intelligence and AI in Games.

[8] Risto Miikkulainen et al. Evolving Multimodal Behavior with Modular Neural Networks in Ms. Pac-Man, 2014, GECCO.

[9] Larry D. Pyeatt et al. A Comparison Between Cellular Encoding and Direct Encoding for Genetic Neural Networks, 1996.

[10] Charles Darwin. The Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, 2004.

[11] Christin Wirth et al. Blondie24: Playing at the Edge of AI, 2016.

[12] Karl Sims. Evolving Virtual Creatures, 1994, SIGGRAPH.

[13] Daniele Loiacono et al. Evolving Competitive Car Controllers for Racing Games with Neuroevolution, 2009, GECCO '09.

[14] Kenneth O. Stanley et al. Transfer Learning Through Indirect Encoding, 2010, GECCO '10.

[15] Dan Lessin, Don Fussell, and Risto Miikkulainen. Adapting Morphology to Multiple Tasks in Evolved Virtual Creatures, 2014, ALIFE.

[16] S. Haykin. Neural Networks: A Comprehensive Foundation, 1994.

[17] Risto Miikkulainen et al. Evolving Neural Networks Through Augmenting Topologies, 2002, Evolutionary Computation.

[18] Kenneth O. Stanley, Bobby D. Bryant, and Risto Miikkulainen. Real-Time Evolution in the NERO Video Game (Winner of CIG 2005 Best Paper Award), 2005, CIG.

[19] Dave Cliff et al. Creatures: Artificial Life Autonomous Software Agents for Home Entertainment, 1997, AGENTS '97.

[20] Julian Togelius et al. Super Mario Evolution, 2009, IEEE Symposium on Computational Intelligence and Games.

[21] Kenneth O. Stanley et al. A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks, 2009, Artificial Life.

[22] M. Ponsen. Automatically Generating Game Tactics via Evolutionary Learning, 2005.