Reasoning about Intentions in Uncertain Domains

The design of autonomous agents situated in real-world domains involves dealing with uncertainty arising from dynamism, partial observability, and non-determinism. Combined with the real-time requirements of many application domains, these three types of uncertainty imply that an agent must be able to coordinate its reasoning effectively. Situated belief-desire-intention (BDI) agents therefore need an efficient intention reconsideration policy, which determines when computational resources are spent on reasoning, i.e., deliberating over intentions, and when they are better spent on object-level reasoning or action. This paper presents an implementation of such a policy that models intention reconsideration as a partially observable Markov decision process (POMDP). The motivation for a POMDP implementation of intention reconsideration is that the two processes have similar properties and functions, as we show in this paper. Our approach achieves better results than existing intention reconsideration frameworks, as we demonstrate empirically.
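To make the analogy concrete, the following is an illustrative sketch (not the paper's implementation) of intention reconsideration cast as a two-state POMDP: the agent maintains a belief about whether its current intention is still valid, updates that belief from noisy observations, and deliberates only when the belief drops below a threshold. All state names, probabilities, and the threshold are hypothetical.

```python
# Illustrative sketch: intention reconsideration as a tiny POMDP.
# States: the current intention is either still VALID or has become STALE.
# Actions: "ACT" (keep executing the intention) or "DELIBERATE" (reconsider).
# All numbers below are hypothetical, chosen only for illustration.

VALID, STALE = 0, 1

# Hypothetical observation model: P(observation "looks ok" | state)
P_OBS_OK = {VALID: 0.9, STALE: 0.3}

# Hypothetical dynamics: per-step chance a valid intention silently goes stale
P_BECOME_STALE = 0.1


def predict(b_valid):
    """Prediction step: a valid intention may become stale between steps."""
    return b_valid * (1.0 - P_BECOME_STALE)


def update(b_valid, looks_ok):
    """Bayesian belief update from a noisy observation of the environment."""
    p_obs_valid = P_OBS_OK[VALID] if looks_ok else 1.0 - P_OBS_OK[VALID]
    p_obs_stale = P_OBS_OK[STALE] if looks_ok else 1.0 - P_OBS_OK[STALE]
    numerator = p_obs_valid * b_valid
    denominator = numerator + p_obs_stale * (1.0 - b_valid)
    return numerator / denominator


def policy(b_valid, threshold=0.6):
    """Spend resources on deliberation only when belief in validity is low."""
    return "ACT" if b_valid >= threshold else "DELIBERATE"
```

Under this sketch, reassuring observations keep the agent acting on its intention, while contrary evidence drives the belief down and triggers deliberation, which mirrors the meta-level control decision the paper optimises.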