Towards a Theory of Flexible Holons: Modelling Institutions for Making Multi-Agent Systems Robust

Multi-agent systems have a reputation for robustness. In practice, however, a multi-agent system must be specifically designed to exhibit this property, and it remains unclear how this can be achieved. One approach is to simulate social systems, since social order is a close analogue of robustness. We discuss this analogy, attempt a definition of robustness, and give a detailed analysis of delegation in multi-agent systems, which we believe helps achieve robustness. Delegation is an integral part of MAS and can be a source of flexibility. We examine delegation along the spectrum from the complete absence of norms to full specification by norms. We argue that the concept of flexible holons, i.e. the flexible grouping of agents through task and social delegation, is a cornerstone for understanding the formation of institutions in multi-agent systems and for exploiting their potential contribution to robustness.
