An electronic-game framework for evaluating coevolutionary algorithms

One common application of artificial intelligence in electronic games is making an artificial agent learn to execute some task successfully in a game environment. One way to accomplish this is through machine learning algorithms capable of learning the sequence of actions required to win in a given game environment. Several supervised learning techniques can learn the correct answer to a problem from examples. However, when learning how to play electronic games, the correct answer may only be known at the end of the game, after all actions have already been taken, making it impossible to measure the accuracy of each individual action at each time step. One way to deal with this problem is Neuroevolution, a method that trains Artificial Neural Networks using evolutionary algorithms. In this article, we introduce EvoMan, a framework for testing optimization algorithms with artificial agent controllers in electronic games, inspired by the action-platformer game Mega Man II. The environment can be configured to run in different experiment modes, such as single evolution and coevolution, among others. To demonstrate some of the challenges posed by the proposed platform, as initial experiments we applied Neuroevolution using Genetic Algorithms and the NEAT algorithm to competitively coevolve two distinct agents in this game.
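The core idea of Neuroevolution with a Genetic Algorithm can be sketched as follows. This is a minimal, hypothetical illustration, not the EvoMan API: each genome is a flat weight vector for a fixed-topology controller, and the `evaluate` function here is a stand-in for playing a full game episode, whose fitness would normally reflect the agent's in-game performance (e.g. damage dealt minus damage taken).

```python
import random

random.seed(0)  # for reproducibility of this sketch

N_WEIGHTS = 10      # weights of a tiny fixed-topology neural controller
POP_SIZE = 20
GENERATIONS = 30
MUTATION_STD = 0.1

def random_genome():
    """A genome is simply a flat list of connection weights."""
    return [random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)]

def evaluate(genome):
    # Stand-in for one game episode: fitness is how close the weights
    # are to an arbitrary target vector. In a real setting, the genome
    # would parameterize a network that maps game sensors to actions,
    # and fitness would be measured from the match outcome.
    target = [0.5] * N_WEIGHTS
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    """Gaussian perturbation of every weight."""
    return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

def evolve():
    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=evaluate, reverse=True)
        elite = pop[: POP_SIZE // 2]                       # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=evaluate)

best = evolve()
```

In a competitive coevolution setting, two such populations would be evolved simultaneously, with each agent's fitness computed from matches against members of the opposing population rather than against a fixed target.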
