Towards a motivation-based approach for evaluating goals

Traditional goal-oriented approaches to building intelligent agents consider only the absolute satisfaction of goals. In continuous domains, however, a goal state may often be only partially satisfied. In these situations the traditional symbolic goal representation must be modified so that an agent can determine a worth value both for a goal state and for any state approximating the goal. In our work we use the concept of worth in two ways. First, we propose a mechanism by which the worth of a goal is dynamically set as a function of the intensity of an underlying motivation. Second, we determine the worth of any state in relation to a goal through a metric that measures the proximity of an environmental state to the goal. In this way, it is possible to make judgements about the relative satisfaction an environmental state offers with regard to a goal.
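
The two uses of worth described above can be illustrated with a minimal sketch. All names and functional forms here are assumptions for illustration only: the mapping from motivation intensity to goal worth is taken to be linear, the proximity metric is taken to be Euclidean distance, and the cutoff `d_max` is a hypothetical normalisation bound; none of these choices is specified by the abstract.

```python
import math

def goal_worth(motivation_intensity: float, max_worth: float = 1.0) -> float:
    """Worth of a goal as a function of the intensity of an underlying
    motivation (assumed linear, clamped to [0, 1] for this sketch)."""
    return max_worth * max(0.0, min(1.0, motivation_intensity))

def proximity(state, goal) -> float:
    """Proximity metric between a continuous environmental state and a
    goal state (assumed Euclidean for this sketch)."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(state, goal)))

def state_worth(state, goal, motivation_intensity: float,
                d_max: float = 10.0) -> float:
    """Worth an environmental state offers toward a goal: the goal's
    motivation-derived worth, scaled down as the state moves away from
    the goal (d_max is a hypothetical normalisation bound)."""
    d = min(proximity(state, goal), d_max)
    return goal_worth(motivation_intensity) * (1.0 - d / d_max)
```

Under these assumptions, a state that exactly satisfies the goal receives the goal's full motivation-derived worth, and states that only approximate the goal receive proportionally less, which is the kind of relative-satisfaction judgement the approach calls for.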
