The design of autonomous agents situated in real-world domains involves dealing with uncertainty in the form of dynamism, partial observability, and non-determinism. These three types of uncertainty, combined with the real-time requirements of many application domains, mean that an agent must be capable of coordinating its reasoning effectively. Situated belief-desire-intention (BDI) agents therefore need an efficient intention reconsideration policy, which defines when computational resources are spent on meta-level reasoning, i.e., deliberating over intentions, and when they are better spent on object-level reasoning or action. This paper presents an implementation of such a policy that models intention reconsideration as a partially observable Markov decision process (POMDP). The motivation for a POMDP implementation of intention reconsideration is that the two processes have similar properties and functions, as we demonstrate in this paper. Our approach achieves better results than existing intention reconsideration frameworks, as we demonstrate empirically.
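To make the framing concrete, the sketch below shows one minimal, hypothetical way to cast the meta-level choice between acting and reconsidering as a tiny POMDP. It is not the paper's actual model: the states, observations, probability tables, reward values, and the myopic greedy policy are all assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a toy POMDP over whether the agent's current
# intention is still valid, used to decide between "act" and "reconsider".
# All names and numbers here are hypothetical, not taken from the paper.

# Hidden environment states: has the world changed enough to invalidate
# the agent's current intention?
STATES = ["intention_valid", "intention_invalid"]

# Meta-level actions available to the BDI agent at each step.
ACTIONS = ["act", "reconsider"]

# Noisy observations: cues about whether the intention still applies.
OBSERVATIONS = ["looks_ok", "looks_stale"]

# P(observation | state): validity cannot be observed directly.
OBS_MODEL = {
    "intention_valid":   {"looks_ok": 0.8, "looks_stale": 0.2},
    "intention_invalid": {"looks_ok": 0.3, "looks_stale": 0.7},
}

# Immediate reward R(state, action): acting on an invalid intention is
# costly, while reconsidering incurs a fixed deliberation cost.
REWARD = {
    ("intention_valid", "act"): 1.0,
    ("intention_invalid", "act"): -2.0,
    ("intention_valid", "reconsider"): -0.3,
    ("intention_invalid", "reconsider"): 0.5,
}


def belief_update(belief, observation):
    """Bayes update of the belief over STATES given a new observation."""
    unnormalised = {s: belief[s] * OBS_MODEL[s][observation] for s in STATES}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}


def choose_action(belief):
    """Myopic policy: pick the action with the highest expected reward."""
    expected = {
        a: sum(belief[s] * REWARD[(s, a)] for s in STATES) for a in ACTIONS
    }
    return max(expected, key=expected.get)


if __name__ == "__main__":
    # Start from an uninformed belief and process a short observation stream.
    belief = {"intention_valid": 0.5, "intention_invalid": 0.5}
    for observation in ["looks_ok", "looks_stale", "looks_stale"]:
        belief = belief_update(belief, observation)
        print(observation, "->", choose_action(belief),
              {s: round(p, 2) for s, p in belief.items()})
```

A full POMDP treatment would also include a state transition model and optimise the policy over a horizon rather than greedily; the sketch only illustrates why belief maintenance over partially observable world changes maps naturally onto the reconsideration decision.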