Robot soccer for the study of learning and coordination issues in multi-agent systems

The goal of the robotic soccer competition is to develop fully autonomous robotic agents that cooperate to perform desired tasks in a highly dynamic environment. The approach is to train the robots using various combinations of learning algorithms: the robots initially have no understanding of their environment, are rewarded for positive contributions to the team, and improve their performance over multiple sessions. Most of the learning takes place in simulation and is then transferred to real physical mobile robots. A tropism-based control architecture is introduced that not only allows cooperative strategies to evolve, but also represents the acquired knowledge in a format that is easily comprehensible to humans. Results from many generations of simulated evolution are presented, along with game results and fitness characteristics.
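The core idea of a tropism-based controller is that behavior is stored as human-readable rules, each pairing a sensed situation with an action and a tropism value (a preference weight) that learning can adjust. The following is a minimal illustrative sketch of that idea; the class, method, and situation names are assumptions for illustration, not the paper's exact encoding.

```python
import random

class TropismController:
    """Hypothetical sketch of a tropism-based rule table.

    Each sensed situation maps to a list of (action, tropism) pairs;
    higher tropism values make an action more likely to be chosen.
    """

    def __init__(self, rules):
        # rules: dict mapping situation -> list of (action, tropism) pairs
        self.rules = rules

    def select_action(self, situation):
        """Pick an action for the situation, weighted by tropism values."""
        candidates = self.rules.get(situation, [])
        if not candidates:
            return None  # no matching rule; the robot takes no action
        actions, weights = zip(*candidates)
        return random.choices(actions, weights=weights, k=1)[0]

    def reinforce(self, situation, action, delta):
        """Adjust one rule's tropism value after a team reward signal."""
        self.rules[situation] = [
            (a, max(1, t + delta) if a == action else t)
            for a, t in self.rules[situation]
        ]

# Usage: a player near the ball strongly prefers kicking toward the goal.
ctrl = TropismController({
    "ball_near": [("kick_to_goal", 8), ("dribble", 3)],
    "ball_far": [("move_to_ball", 9), ("guard_zone", 2)],
})
action = ctrl.select_action("ball_near")
```

Because the rule table is just a list of situation-action-weight entries, the evolved strategy stays inspectable by humans, which is the comprehensibility property the abstract emphasizes.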
