Emergence of multiagent spatial coordination strategies through artificial coevolution

Abstract This paper describes research investigating the evolution of coordination strategies in robot soccer teams. Each player (viewed as an agent) is provided with a common set of skills and is assigned to operate within a delimited area of the soccer field. The idea is to optimize the behavior of the whole team by means of a spatial coadaptation process in which new players are selected so as to complement the already existing ones. The main results show that, through coevolution, we progressively obtain teams whose members act on complementary areas of the playing field and are capable of prevailing over a standard opponent team with a fixed formation.
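To make the spatial coadaptation process more concrete, the following is a minimal sketch of a cooperative coevolutionary loop in which each player slot has its own sub-population of candidate field areas, and each candidate is evaluated together with the current best teammates so that selection favours complementary areas. The field dimensions, team size, genome encoding, and the `play_match` fitness function are all assumptions introduced here for illustration; the paper's actual simulator, player skills, and opponent team are not reproduced.

```python
import random

# Assumed setup (not from the paper): a 105 x 68 pitch, 5 evolved player slots,
# and a genome that encodes the rectangular area a player is allowed to act in.
FIELD_W, FIELD_H = 105.0, 68.0
TEAM_SIZE = 5
POP_SIZE = 20
GENERATIONS = 50

def random_area():
    """Genome: (x_center, y_center, width, height) of a player's allowed area."""
    return (random.uniform(0, FIELD_W), random.uniform(0, FIELD_H),
            random.uniform(5, FIELD_W / 2), random.uniform(5, FIELD_H / 2))

def mutate(area, sigma=3.0):
    """Gaussian perturbation of every gene."""
    return tuple(g + random.gauss(0, sigma) for g in area)

def play_match(team_areas):
    """Hypothetical fitness stand-in for the soccer simulator: in the paper this
    would be the outcome against a fixed-formation opponent; here it simply
    rewards spreading the players across the field so the sketch runs end to end."""
    xs = sorted(a[0] for a in team_areas)
    return sum(b - a for a, b in zip(xs, xs[1:]))

# One sub-population per player slot (cooperative coevolution in the spirit of
# Potter and De Jong's architecture for coadapted subcomponents).
populations = [[random_area() for _ in range(POP_SIZE)] for _ in range(TEAM_SIZE)]
representatives = [pop[0] for pop in populations]

for gen in range(GENERATIONS):
    for slot, pop in enumerate(populations):
        scored = []
        for candidate in pop:
            # Evaluate the candidate together with the current best teammates,
            # so selection favours areas that complement the existing ones.
            team = representatives[:slot] + [candidate] + representatives[slot + 1:]
            scored.append((play_match(team), candidate))
        scored.sort(key=lambda s: s[0], reverse=True)
        representatives[slot] = scored[0][1]
        survivors = [a for _, a in scored[:POP_SIZE // 2]]
        populations[slot] = survivors + [mutate(random.choice(survivors))
                                         for _ in range(POP_SIZE - len(survivors))]

print("Evolved areas:", representatives)
```

Under these assumptions, the key design choice is that a candidate area is never scored in isolation: its fitness depends on how well it combines with the areas already adopted for the other player slots, which is what drives the emergence of complementary spatial roles.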
