Adaptivity in Agent-Based Systems via Interplay between Action Selection and Norm Selection

Beyond the ability to adapt to an environment, Self-Adaptive Software (SAS) embodies the capacity and the initiative to adapt its own behavior. The adaptation arises from the need to maintain a desired, or reference, relationship between a set of input and output signals. We may loosely divide SAS into an adaptor component and an adapted component. The adapted component maintains an ongoing relationship with the environment, while the adaptor detects and evaluates the need for change in the operation of the adapted component. In anthropomorphic terms, this detection and evaluation involves the cognitive processes of introspection and assimilation; in an artifact, a supervisory control module may suffice. In a hierarchical system, the adaptor/adapted distinction can be extended to several levels, with each higher level adapting the function of the level below it. Hierarchical architectures are well studied; however, self-adaptive software is more than hierarchical control or the application of adaptive techniques such as neural networks or genetic algorithms.
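
To make the adaptor/adapted split concrete, here is a minimal sketch, assuming a toy setpoint-tracking scenario in Python; the class names (`AdaptedComponent`, `Adaptor`), the gain parameter, and the corrective rule are illustrative assumptions rather than the architecture proposed in this paper.

```python
# Illustrative sketch of the adaptor/adapted split described above.
# All names and the gain-adjustment rule are hypothetical, not from the paper.

class AdaptedComponent:
    """Interacts with the environment: maps an input signal to an output signal."""

    def __init__(self, gain: float = 1.0):
        self.gain = gain

    def act(self, input_signal: float) -> float:
        # Ongoing relationship with the environment: produce an output.
        return self.gain * input_signal


class Adaptor:
    """Supervisory module: detects and evaluates the need for change."""

    def __init__(self, reference_ratio: float, tolerance: float = 0.05):
        self.reference_ratio = reference_ratio  # desired output/input relationship
        self.tolerance = tolerance

    def adapt(self, component: AdaptedComponent,
              input_signal: float, output_signal: float) -> None:
        if input_signal == 0:
            return
        observed_ratio = output_signal / input_signal
        error = self.reference_ratio - observed_ratio
        # "Introspection and assimilation" reduced to a simple corrective rule:
        if abs(error) > self.tolerance:
            component.gain += 0.5 * error  # nudge the adapted component's behavior


if __name__ == "__main__":
    adapted = AdaptedComponent(gain=0.5)
    adaptor = Adaptor(reference_ratio=2.0)
    for step in range(10):
        x = 1.0                       # input signal from the environment
        y = adapted.act(x)            # adapted component responds
        adaptor.adapt(adapted, x, y)  # adaptor maintains the reference relationship
        print(f"step {step}: output={y:.3f}, gain={adapted.gain:.3f}")
```

In this sketch the adaptor plays the role of the supervisory control module: it compares the observed input/output relationship against the reference one and adjusts the adapted component only when the deviation exceeds a tolerance.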
