Overloading Intentions for Efficient Practical Reasoning

Abstract: Agents, whether biological or artificial, have bounded reasoning capabilities. As a result, they cannot make reasoned decisions instantaneously; reasoning takes time. Agents situated in dynamic environments therefore face a potential difficulty when they must decide what to do: they run the risk that the world may change in ways that undermine the very assumptions upon which their reasoning is proceeding. Dynamic environments and computational resource bounds thus pose a challenge that has led some researchers in Artificial Intelligence (AI) to propose that artificial agents be designed to avoid execution-time practical reasoning. In this paper, the author argues that there is a way in which an agent's plans can be used to constrain practical reasoning: they can suggest solutions to means-end reasoning problems that the agent subsequently encounters. Moreover, such solutions can often be accepted without further deliberation about possible alternatives. An agent will often be able to guide its search for a way to achieve some goal G by looking for an action A that it already intends and that can also subserve G, or by looking for an intention that can be overloaded. If it succeeds in this, it can typically avoid attempting to find alternative ways of achieving G; it need not weigh the solution involving A against competing options. The author argues that such a strategy, fine-tuned in appropriate ways, is rational, despite the fact that it may sometimes lead to suboptimal behavior.
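To make the overloading strategy concrete, the following is a minimal Python sketch of the decision procedure the abstract describes. All names here (Intention, subserves, deliberate, overload_or_deliberate) are illustrative assumptions for exposition, not the paper's own formalism: the point is only that the agent scans its existing intentions for an action that can also subserve the new goal G, and falls back to full means-end deliberation only when no such intention is found.

```python
# Minimal sketch of the overloading strategy (illustrative names,
# not the paper's formalism).

from dataclasses import dataclass, field


@dataclass
class Intention:
    """An action the agent is already committed to performing."""
    action: str
    goals: set = field(default_factory=set)  # goals this action serves


def overload_or_deliberate(intentions, goal, subserves, deliberate):
    """Try to satisfy `goal` by overloading an existing intention.

    First look for an already-intended action that can also subserve
    `goal`; if one is found, accept it without weighing competing
    alternatives. Only fall back to full means-end deliberation when
    no existing intention can be overloaded.
    """
    for intention in intentions:
        if subserves(intention.action, goal):
            intention.goals.add(goal)  # overload: one action, many goals
            return intention.action
    return deliberate(goal)            # costly search over alternatives


# Usage: an agent that already intends to drive downtown can overload
# that intention to also serve the goal of mailing a letter.
if __name__ == "__main__":
    plans = [Intention("drive downtown", {"attend meeting"})]
    can_serve = {("drive downtown", "mail letter")}
    action = overload_or_deliberate(
        plans,
        "mail letter",
        subserves=lambda a, g: (a, g) in can_serve,
        deliberate=lambda g: f"plan from scratch for {g}",
    )
    print(action)  # -> "drive downtown"
```

The design choice mirrors the rationality claim in the abstract: the agent accepts the first overloadable intention it finds rather than enumerating and comparing alternatives, trading occasional suboptimality for a large saving in execution-time reasoning.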