Learning and Communication in Multi-Agent Systems

This paper discusses the significance of communication between individual agents embedded in learning Multi-Agent Systems. For several learning tasks occurring within a Multi-Agent System, communication activities are investigated, and the need for a mutual understanding among the agents participating in the learning process is made explicit. This demonstrates the need for a common ontology for exchanging learning-related information. Building this ontology is an additional learning task that is not only extremely important, but also extremely difficult. We propose a solution motivated by the human ability to understand one another even in the absence of a common language, by using alternative communication channels such as gestures. We show some results for the task of cooperative material handling by several manipulators.
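The role of a common ontology in agent communication can be made concrete with a minimal, hypothetical message-passing sketch in Python. All names here (`Message`, `Agent`, the `grasp-ontology` tag) are illustrative assumptions, not from the paper: the point is only that a receiver can interpret learning-related content solely when it shares the ontology the content is expressed in.

```python
from dataclasses import dataclass

# Hypothetical sketch of an ontology-tagged message, loosely in the
# spirit of agent communication languages such as KQML.  Every name
# below is illustrative, not taken from the paper.

@dataclass
class Message:
    performative: str   # e.g. "tell", "ask"
    sender: str
    receiver: str
    ontology: str       # shared vocabulary the content is expressed in
    content: dict

class Agent:
    def __init__(self, name: str, ontology: str):
        self.name = name
        self.ontology = ontology
        self.knowledge = []

    def receive(self, msg: Message) -> bool:
        # Content is interpretable only if both agents share the ontology;
        # otherwise there is no mutual understanding and it is discarded.
        if msg.ontology != self.ontology:
            return False
        self.knowledge.append(msg.content)
        return True

a = Agent("manipulator-1", "grasp-ontology")
b = Agent("manipulator-2", "grasp-ontology")
msg = Message("tell", a.name, b.name, "grasp-ontology",
              {"skill": "lift", "object": "beam"})
print(b.receive(msg))  # True: shared ontology, content is stored
```

An agent whose ontology tag differs would return `False` here, which is exactly the situation the paper's ontology-building task is meant to resolve.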
