Rational interaction: cooperation among intelligent agents

The development of intelligent agents presents opportunities for cooperation among them. Before such cooperation can occur, however, a framework for reasoning about interactions must be built. This dissertation describes such a framework and explores strategies of interaction among intelligent agents. The formalism that has been developed removes some serious restrictions underlying previous research in distributed artificial intelligence, particularly the assumption that the interacting agents have identical or non-conflicting goals. The formalism allows each agent to make various assumptions about both the goals and the rationality of other agents. A hierarchy of rationality assumptions is presented, along with an analysis of the consequences that follow when an agent believes a particular level of the hierarchy describes the other agents' rationality. In addition, the formalism allows the modeling both of restrictions on communication and of binding promises among agents. Computation on the part of each individual agent can often obviate the need for inter-agent communication. However, when communication and promises are allowed, fewer assumptions need be made about the rationality of other agents when choosing one's own rational course of action.
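
To make the idea of a rationality hierarchy concrete, the following minimal sketch (an invented illustration, not the dissertation's formalism; the payoff values, function names, and the two "levels" are all assumptions) shows how an agent's rational choice in a simple payoff matrix can change as it ascribes more rationality to the other agent.

    # A minimal sketch, assuming a two-agent payoff matrix; the matrix values
    # and the two "levels" below are invented for illustration.

    # PAYOFF[i][j] = (payoff to A, payoff to B) when A plays move i, B plays move j.
    PAYOFF = [
        [(0, 1), (4, 2)],   # A's move 0
        [(1, 1), (1, 2)],   # A's move 1
    ]

    def a_payoff(i, j):
        return PAYOFF[i][j][0]

    def b_payoff(i, j):
        return PAYOFF[i][j][1]

    def maximin_for_a(b_moves):
        # A's safest move: maximize the worst case over the B-moves A considers possible.
        return max(range(len(PAYOFF)),
                   key=lambda i: min(a_payoff(i, j) for j in b_moves))

    def undominated_b_moves():
        # B-moves not strictly dominated by some other B-move: a weak rationality assumption.
        moves = range(len(PAYOFF[0]))
        return [j for j in moves
                if not any(all(b_payoff(i, k) > b_payoff(i, j)
                               for i in range(len(PAYOFF)))
                           for k in moves if k != j)]

    # Level 0: A assumes nothing about B and guards against every possible B-move.
    print(maximin_for_a(range(len(PAYOFF[0]))))   # -> 1 (the cautious move)

    # Level 1: A assumes B is at least minimally rational, i.e. never plays a
    # strictly dominated move; restricting B's moves changes A's choice.
    print(maximin_for_a(undominated_b_moves()))   # -> 0 (the higher-payoff move)

In this toy setting, each additional level of ascribed rationality shrinks the set of moves the other agent might play, letting the first agent commit to a less conservative choice without any communication; this mirrors the point above that an agent's own computation can substitute for inter-agent communication.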