Evolutionary strategies for novelty-based online neuroevolution in swarm robotics

Neuroevolution of robot controllers through objective-based genetic and evolutionary algorithms is a well-established methodology for studying the dynamics of evolution in swarms of simple robots. Each robot in the swarm evolves the small neural network embedded as its controller while also taking into account how other robots are performing the task at hand. In online scenarios, this is achieved through inter-robot communication of the best-performing genomes (i.e. representations of the weights of each robot's embedded neural network). While many experiments in previous work have shown the soundness of this approach, we extend the methodology with a novelty-based metric in order to analyze different genome exchange strategies within a simulated swarm of robots in deceptive tasks, or in scenarios where it is difficult to formulate a suitable objective function to drive evolution. In particular, we study how different information-sharing approaches affect evolution. To this end, we developed and tested three strategies for exchanging genomes and behavioral information between robots under novelty-driven evolution, and compared them using a recent variant of the mEDEA (minimal Environment-driven Distributed Evolutionary Algorithm) algorithm. As the deceptiveness and complexity of the task increase, the proposed novelty-driven strategies show better performance in foraging scenarios.
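To make the novelty-driven selection concrete, the sketch below shows the standard novelty-search sparseness measure (mean distance to the k nearest behavior descriptors) and how a robot might rank genomes broadcast by its neighbours using that score. This is a minimal illustration, not the paper's implementation: the choice of behavior descriptor, the value of k, and the function names (`novelty_score`, `select_parent`) are assumptions introduced here for clarity.

```python
import numpy as np

def novelty_score(descriptor, neighbours, k=15):
    """Sparseness of a behavior descriptor: mean Euclidean distance to its
    k nearest neighbours among descriptors received from other robots plus
    a local archive. The descriptor (e.g. final position, items foraged)
    and k are illustrative assumptions."""
    if len(neighbours) == 0:
        return 0.0
    neighbours = np.asarray(neighbours, dtype=float)
    dists = np.linalg.norm(neighbours - np.asarray(descriptor, dtype=float), axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())

def select_parent(received, archive, k=15):
    """Pick the most novel genome among those broadcast by nearby robots.

    `received` is a list of (genome, behavior_descriptor) pairs collected
    during the current evaluation period; `archive` holds descriptors kept
    from earlier generations. A plain mEDEA-style scheme would pick a
    received genome at random; ranking by novelty is one of the exchange
    strategies this sketch illustrates.
    """
    pool = [d for _, d in received] + list(archive)
    best_genome, best_score = None, -1.0
    for genome, desc in received:
        others = [p for p in pool if p is not desc]
        score = novelty_score(desc, others, k)
        if score > best_score:
            best_genome, best_score = genome, score
    return best_genome
```

In an online, distributed setting each robot would call `select_parent` locally at the end of its evaluation period, so no global population or central archive is required; only the descriptors exchanged over local communication are used.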
