Reasoning with the Outcomes of Plan Execution in Intentional Agents

Intentional agents must be aware of their successes and failures in order to assess their progress towards their intended goals. However, our analysis of intentional agent systems indicates that existing architectures are inadequate in this regard. Specifically, existing systems provide few, if any, mechanisms for monitoring the execution of behaviors for failure. This inability to detect failure means that agents retain an unrealistically optimistic view of the success of their behaviors and the state of their environment. In this paper we extend the solution proposed in [1] in three ways. Firstly, we extend the formulation to handle cases in which an agent has conflicting evidence regarding the causation of the effects of a plan or action; we do this by identifying a number of policies that an agent may use to resolve these conflicts. Secondly, we provide mechanisms by which the agent can invoke its failure-handling routines to recover when failure is detected. Lastly, we lift the requirement that all the effects be realized simultaneously and allow for progressive satisfaction of effects. Like the original solution, these extensions can be applied to existing BDI systems.
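
To make the monitoring idea concrete, the following is a minimal, illustrative sketch, not the paper's formalism, of how a BDI-style executor might track the expected effects of a plan, allow those effects to be satisfied progressively rather than all at once, apply a simple policy when the agent's evidence about an effect is conflicting, and hand control to a failure-handling routine when an effect is judged to have failed. All class, function, and policy names here are hypothetical.

```python
from enum import Enum


class Evidence(Enum):
    """Hypothetical three-valued evidence an agent may hold about an effect."""
    SUPPORTED = 1      # beliefs indicate the effect holds
    REFUTED = 2        # beliefs indicate the effect does not hold
    CONFLICTING = 3    # some beliefs support the effect, others refute it


def optimistic_policy(evidence: Evidence) -> bool:
    """One possible conflict policy: treat conflicting evidence as success."""
    return evidence in (Evidence.SUPPORTED, Evidence.CONFLICTING)


def pessimistic_policy(evidence: Evidence) -> bool:
    """An alternative policy: treat conflicting evidence as failure pending."""
    return evidence is Evidence.SUPPORTED


class PlanFailure(Exception):
    """Signals that an expected effect has been refuted, so the agent's
    existing BDI failure handling (e.g. plan selection) can take over."""
    def __init__(self, effect):
        super().__init__(f"expected effect not achieved: {effect}")
        self.effect = effect


class EffectMonitor:
    """Tracks the expected effects of a plan, allowing them to be
    satisfied progressively rather than all simultaneously."""

    def __init__(self, expected_effects, policy=pessimistic_policy):
        self.pending = set(expected_effects)   # effects not yet satisfied
        self.policy = policy

    def observe(self, effect, evidence: Evidence):
        """Record new evidence about one expected effect of the plan."""
        if effect not in self.pending:
            return
        if self.policy(evidence):
            self.pending.discard(effect)       # progressively satisfied
        elif evidence is Evidence.REFUTED:
            raise PlanFailure(effect)          # unambiguous failure

    def succeeded(self) -> bool:
        """True once every expected effect has been satisfied."""
        return not self.pending
```

In this sketch, a plan body would report observations to the monitor as it executes; when `observe` raises `PlanFailure`, the agent's usual failure-handling machinery (such as dropping the intention or selecting an alternative plan) is invoked, rather than the agent continuing under an unrealistically optimistic view of its effects.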