Single- and multi-objective game-benchmark for evolutionary algorithms

Despite considerable interest in real-world problems within the field of evolutionary optimisation, the established benchmarks in the field are mostly artificial. We propose using game optimisation problems to form a benchmark and implement function suites designed to work with the established COCO benchmarking framework. Game optimisation problems are real-world problems that are safe and reasonably complex, yet practical, as they are relatively fast to compute. We have created four function suites based on two optimisation problems previously published in the literature (TopTrumps and MarioGAN). For each application, we implemented multiple instances of several scalable single- and multi-objective functions with different characteristics and fitness landscapes. Our results show that game optimisation problems are interesting and challenging for evolutionary algorithms.
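
Because the function suites plug into COCO's standard experimentation loop, any solver that can evaluate a COCO problem can be benchmarked against them. Below is a minimal sketch of such a loop using the cocoex Python module and plain random search; the suite name "rw-top-trumps", the observer choice, and the result-folder name are assumptions for illustration and should be replaced by whatever suites are installed locally.

import numpy as np
import cocoex  # COCO experimentation module

# Suite and observer names are assumptions for this sketch.
suite = cocoex.Suite("rw-top-trumps", "", "")
observer = cocoex.Observer("bbob", "result_folder: rw-demo")

for problem in suite:                 # iterate over all function instances
    problem.observe_with(observer)    # log evaluations for post-processing
    rng = np.random.default_rng(seed=1)
    budget = 100 * problem.dimension  # small illustrative evaluation budget
    for _ in range(budget):
        # sample uniformly within the problem's box constraints and evaluate
        x = rng.uniform(problem.lower_bounds, problem.upper_bounds)
        problem(x)

The logged results can then be compared across algorithms with COCO's usual post-processing tooling, exactly as for the artificial bbob suites.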
