Incremental Coevolution With Competitive and Cooperative Tasks in a Multirobot Environment

Coevolution has received increasing attention as a method for simultaneously developing the control structures of multiple agents. Our ultimate goal is the mutual development of skills through coevolution. The coevolutionary process, however, often settles into suboptimal strategies, and the key to successful coevolution has so far remained unclear. This paper discusses how several robots can develop cooperative and competitive behaviors through coevolutionary processes. To realize successful coevolution, we propose two ideas: multiple schedules for incremental evolution and fitness sharing based on importance sampling. To examine these ideas, we conducted a series of computer simulations, choosing a simplified soccer game with two or three robots as a testbed for a problem that involves both competitive and cooperative tasks. We show that the proposed fitness evaluation allows robots to evolve robust behaviors in cooperative and competitive situations.
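The core of an importance-sampling-based fitness evaluation can be illustrated with a minimal sketch. This is not the paper's exact formulation; the function name, the representation of episodes as (opponent, reward) pairs, and the two opponent distributions are illustrative assumptions. The idea shown is standard importance sampling: episodes collected against opponents drawn from one (behavior) distribution are reweighted to estimate the expected fitness a robot would obtain against a different (target) distribution of opponents.

```python
def importance_weighted_fitness(episodes, target_prob, behavior_prob):
    """Estimate expected fitness under a target opponent distribution
    from episodes sampled under a behavior distribution.

    episodes      : list of (opponent_id, reward) pairs
    target_prob   : dict mapping opponent_id -> probability under the
                    distribution we want the fitness estimate for
    behavior_prob : dict mapping opponent_id -> probability under the
                    distribution the episodes were actually sampled from
    """
    total = 0.0
    for opponent, reward in episodes:
        # Importance weight: ratio of target to behavior probability
        # for the opponent this episode was played against.
        w = target_prob[opponent] / behavior_prob[opponent]
        total += w * reward
    return total / len(episodes)
```

For example, episodes sampled uniformly over two opponents can be reweighted to estimate fitness against only the harder opponent, without replaying any games; when the target equals the behavior distribution, the estimate reduces to the plain average reward.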
