Synthetic agents with varying degrees of intelligence and autonomy are being designed in many research laboratories. The motivations include military training simulations, games and entertainments, educational software, digital personal assistants, software agents managing Internet transactions, and purely scientific curiosity.

Different approaches are being explored, ranging from research on the interactions between agents at one extreme to research on processes within agents at the other. The first approach focuses on forms of communication, requirements for consistent collaboration, planning of coordinated behaviours to achieve collaborative goals, extensions to logics of action and belief for multiple agents, and the types of emergent phenomena that arise when many agents interact, for instance when taking routing decisions on a telecommunications network. The second approach focuses on the internal architecture individual agents require for social interaction, collaborative behaviour, complex decision making, learning, and emergent phenomena within complex agents. Agents with complex internal structure may, for example, combine perception, motive generation, planning, plan execution, execution monitoring, and even emotional reactions.

We expect the second approach to become increasingly important for large multi-agent systems deployed in networked environments, as the level of intelligence required of individual agents increases. This is particularly relevant to work on agents which must cooperate to perform tasks requiring planning, problem solving, learning, opportunistic redirection of plans, and fine judgement, in a partially unpredictable environment. In such contexts, important new information about something other than the current goal can arrive at unexpected times or be found in unexpected contexts, and there is often insufficient time for deliberation. This requires reactive mechanisms. However, some tasks involve achieving new types of goals or acting in novel contexts, which may require deliberative mechanisms. Dealing with conflicting goals, or adapting to changing opportunities and cultures, may require sophisticated motivational mechanisms. Motivations for such research include: an interest in modelling human mental functioning (e.g., emotions), a desire for more interesting synthetic agents (‘believable agents’) in games and computer entertainments, and the need for intelligent agents capable of performing more complex tasks than hitherto.
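The interplay of reactive and deliberative mechanisms described above can be sketched as a toy two-layer control loop. This is an illustrative sketch only: the class, reflex rules, and placeholder planner below are invented for exposition and are not drawn from any particular agent toolkit. The key idea it shows is that urgent percepts preempt deliberation and can force opportunistic redirection of the current plan.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Percept:
    kind: str
    urgent: bool = False

class HybridAgent:
    """Toy two-layer agent: a reactive layer handles urgent percepts
    immediately; a deliberative layer plans toward the current goal."""

    def __init__(self) -> None:
        # Reactive rules: percept kind -> immediate action (invented examples).
        self.reflexes = {"obstacle": "swerve", "alarm": "halt"}
        self.plan: List[str] = []

    def deliberate(self, goal: str) -> List[str]:
        # Placeholder planner: a real agent would search or plan here.
        return [f"step-{i}-towards-{goal}" for i in range(3)]

    def step(self, percept: Optional[Percept], goal: str) -> str:
        # Reactive layer: urgent percepts bypass deliberation entirely.
        if percept and percept.urgent and percept.kind in self.reflexes:
            self.plan.clear()  # opportunistic redirection: old plan may be stale
            return self.reflexes[percept.kind]
        # Deliberative layer: (re)plan only when no plan remains.
        if not self.plan:
            self.plan = self.deliberate(goal)
        return self.plan.pop(0)
```

For example, a call sequence `step(None, "depot")` executes the first planned step, while `step(Percept("obstacle", urgent=True), "depot")` returns the reflex action "swerve" and discards the pending plan, so the next non-urgent step triggers replanning.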