This paper describes a design study for a variant of the ‘Attention Filter Penetration’ (AFP) three-layer architecture (Sloman, 2000). In an AFP architecture, the activities of an agent are distributed across three concurrently executing layers: a reactive layer, in which detection of internal or external conditions immediately generates a new internal or external response; a deliberative layer, responsible for ‘what if’ reasoning and planning; and a meta-management layer, which provides self-monitoring, self-evaluation and self-redirection, including control of attention. We sketch a design for the reactive-deliberative interface based on two main assumptions: that deliberation (and meta-deliberation or management) is the technique of last resort for an agent, used only when no reactive behaviour is clearly applicable in a given situation; and that deliberation is just something that some reactive systems do. In doing so, we attempt to identify those parts of the design which seem fairly uncontroversial, and to highlight those issues which have yet to be resolved.

The aim is to design an architecture for an agent which can control its efforts to achieve one or more goals in some domain, and which can adapt its behaviour to changes in the environment and to difficulties encountered in achieving its goals.
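The division of labour between the three layers can be pictured with a minimal sketch. This is a hypothetical rendering, not an implementation from the paper; all class and method names, and the rule representation, are illustrative assumptions:

```python
# Illustrative sketch of the three AFP layers (names are assumptions).

class ReactiveLayer:
    """Condition -> response rules: a detected internal or external
    condition immediately generates a new internal or external response."""
    def __init__(self):
        self.rules = []  # list of (condition, response) callables

    def step(self, percepts, state):
        # Fire every rule whose condition currently holds.
        return [respond(percepts, state)
                for holds, respond in self.rules
                if holds(percepts, state)]

class DeliberativeLayer:
    """'What if' reasoning and planning over hypothetical futures."""
    def plan(self, goal, state):
        # Placeholder: a search over candidate action sequences would go here.
        return [f"step towards {goal}"]

class MetaManagementLayer:
    """Self-monitoring, self-evaluation and redirection of attention."""
    def monitor(self, reactive_trace, deliberative_trace):
        # E.g. notice unproductive deliberation and redirect attention.
        return {"attend_to": "current_goal"}
```

In the full architecture all three objects would run concurrently, with the reactive layer feeding motives upward and the meta-management layer redirecting the deliberative layer's attention.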
An attention filter with a dynamically varying interrupt threshold protects the resource-limited deliberative and meta-management layers when they are dealing with tasks that are important, urgent and resource-consuming. We make two main assumptions: that deliberation (and meta-deliberation or management) is the technique of last resort for an agent, used only when no reactive behaviour is clearly applicable in a given situation; and that deliberation (and management) is just something that some reactive systems do. In doing so, we are not attempting an explanatory reduction of the deliberative or management layers to the reactive layer; rather, the aim is to sketch an implementation of the virtual machines which operate at these layers in terms of the primitives available at the reactive layer. Some such reduction must be possible: some kinds of deliberative and management behaviour must ‘just happen’, otherwise we face an infinite regress. However, there is no reason in principle why they should reduce to mechanisms at the reactive layer; they could, for example, be implemented using distinct machinery ‘at’ their respective layers. The assumption that they are not is perhaps the central design decision of this paper.

Of particular interest, therefore, is the interaction between the reactive and deliberative layers: both the generation of new motives or goals by the reactive layer, and when and how these are scheduled for processing at the deliberative layer. We focus first on the reactive layer of the architecture, to clarify what it can and cannot do and to outline how it does what it does, before attempting to show how the deliberative and management layers can be implemented as particular kinds of reactive behaviour. The paper attempts to identify those parts of the design which seem fairly uncontroversial, and to highlight those issues which have yet to be resolved. In several cases, a number of possible approaches to an unresolved issue are identified.
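The two ideas above, a filter whose interrupt threshold varies with the deliberative layer's current load, and deliberation invoked only when no reactive behaviour clearly applies, might be sketched as follows. The scalar ‘insistence’ measure and all names here are assumptions introduced for illustration, not mechanisms specified by the paper:

```python
# Illustrative sketch (not the paper's implementation): motives generated at
# the reactive layer carry an "insistence" value, and only motives whose
# insistence exceeds a dynamically varying threshold interrupt the
# resource-limited deliberative layer.

class AttentionFilter:
    def __init__(self, base_threshold=0.5):
        self.base_threshold = base_threshold
        self.busyness = 0.0  # 0..1: how loaded the deliberative layer is

    def threshold(self):
        # The threshold rises while the deliberative layer is occupied with
        # an important, urgent, resource-consuming task.
        return self.base_threshold + 0.5 * self.busyness

    def passes(self, insistence):
        return insistence > self.threshold()

def handle_motive(motive, insistence, reactive_rules, filt, deliberate):
    # Deliberation as last resort: use a reactive behaviour if one is
    # clearly applicable; otherwise the motive must penetrate the filter
    # before it is scheduled for deliberation.
    for applies, behave in reactive_rules:
        if applies(motive):
            return behave(motive)
    if filt.passes(insistence):
        return deliberate(motive)
    return None  # motive suppressed by the filter, for now
```

For example, an ‘obstacle’ motive matched by a reactive rule is handled immediately without deliberation, whereas an unmatched but highly insistent motive penetrates the filter, and the same motive with low insistence is simply suppressed.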
Such speculations are not intended as exhaustive enumerations of the options; rather, they indicate the current state of work on a topic and illustrate, insofar as this is possible at this stage, some of the main issues that would have to be addressed by any solution.

1 The Attention Filter Penetration
References

[1] Luc Beaudoin. Goal Processing in Autonomous Agents, 1994.
[2] G. Humphreys, et al. Explorations in Design Space, 1994.
[3] J. Stainer, et al. The Emotions, 1922, Nature.
[4] Aaron Sloman, et al. What Sort of Control System Is Able to Have a Personality?, 1997, Creating Personalities for Synthetic Actors.
[5] Allen Newell, et al. SOAR: An Architecture for General Intelligence, 1987, Artif. Intell.
[6] Riccardo Poli, et al. SIM_AGENT: A Toolkit for Exploring Agent Designs, 1995, ATAL.
[7] Aaron Sloman, et al. Damasio, Descartes, alarms and meta-management, 1998, SMC'98 Conference Proceedings, IEEE International Conference on Systems, Man, and Cybernetics.
[8] Michael E. Bratman, et al. Intention, Plans, and Practical Reason, 1991.
[9] Aaron Sloman, et al. Architectural Requirements for Human-Like Agents Both Natural and Artificial: What sorts of machines can love?, 2000.
[10] Aaron Sloman, et al. Why Robots Will Have Emotions, 1981, IJCAI.
[11] A. Sloman, et al. A Study of Motive Processing and Attention, 1993.
[12] Aaron Sloman, et al. Building cognitively rich agents using the SIM_Agent toolkit, 1999, CACM.