This paper provides a logical analysis of the concept of intention as composed of two more basic concepts, choice (or goal) and commitment. By making explicit the conditions under which an agent can drop her goals, i.e., by specifying how the agent is committed to her goals, the formalism provides analyses for Bratman's three characteristic functional roles played by intentions [Bratman, 1986], and shows how agents can avoid intending all the foreseen side-effects of what they actually intend. Finally, the analysis shows how intentions can be adopted relative to a background of relevant beliefs and other intentions or goals. By relativizing one agent's intentions in terms of beliefs about another agent's intentions (or beliefs), we derive a preliminary account of interpersonal commitments.

By now, it is obvious to all interested parties that autonomous agents need to infer the intentions of other agents, in order to help those agents, hinder them, communicate with them, and in general to predict their behavior. Although intent and plan recognition has become a major topic of research for computational linguistics and distributed artificial intelligence, little work has addressed what these intentions are. Earlier work equating intentions with plans [1986] has addressed the collection of mental states agents would have in having a plan. However, many properties of intention are left out, properties that an observer can make good use of. For example, knowing that an agent intends to achieve something, and seeing her fail, an observer may conclude that the agent is likely to try again. This paper provides a formal foundation for making such predictions.

I. Intention as a Composite

We model intention as a composite concept specifying what the agent has chosen and how the agent is committed to that choice. First, consider agents as choosing from among their (possibly inconsistent) desires those they want most. Call what follows from these chosen desires, loosely, goals. Next, consider an agent to have a persistent goal if she has a goal that she believes currently to be false, and that remains chosen at least as long as certain conditions hold. Persistence involves an agent's internal commitment over time to her choices. In the simplest case, a "fanatic" will drop her commitment only if she believes the goal has been achieved or is impossible to achieve. Finally, intention is modelled as a kind of persistent goal: a persistent goal to …
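As a rough sketch of the "fanatical" commitment just described, a persistent goal can be rendered in modal notation along the following lines (the operator names P-GOAL, GOAL, BEL, LATER, ALWAYS, and BEFORE are assumed here for illustration in the authors' style, not quoted from the paper):

\[
(\mathit{P\text{-}GOAL}\ x\ p) \;\equiv\; (\mathit{GOAL}\ x\ (\mathit{LATER}\ p)) \;\wedge\; (\mathit{BEL}\ x\ \neg p) \;\wedge\; \big(\mathit{BEFORE}\ ((\mathit{BEL}\ x\ p) \vee (\mathit{BEL}\ x\ (\mathit{ALWAYS}\ \neg p)))\ \neg(\mathit{GOAL}\ x\ (\mathit{LATER}\ p))\big)
\]

Read: the agent has chosen that p hold later, currently believes p is false, and, before giving up that choice, must first come to believe either that p has been achieved or that p can never be achieved.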
[1] C. Raymond Perrault, et al. Analyzing Intention in Dialogues, 1978.
[2] Drew McDermott, et al. A Temporal Logic for Reasoning About Processes and Plans, 1982, Cogn. Sci.
[3] Michael E. Bratman, et al. Two Faces of Intention, 1984.
[4] Martha E. Pollack, et al. Inferring domain plans in question-answering, 1986.
[5] Michael E. Bratman, et al. Intention, Plans, and Practical Reason, 1991.
[6] C. Raymond Perrault, et al. Elements of a Plan-Based Theory of Speech Acts, 1979, Cogn. Sci.
[7] Philip R. Cohen, et al. Persistence, Intention, and Commitment, 2003.
[8] Hector J. Levesque, et al. Rational interaction as the basis for communication, 2003.