Dynamic Knowledge Integration during Plan Execution

Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109
laird@umich.edu
FAX: (313) 747-1761

The goal of our work is to develop architectures for general intelligent agents that can use large bodies of knowledge to achieve a variety of goals in realistic environments. Our efforts to date have been realized in the Soar architecture. In this paper we provide an overview of plan execution in Soar. Soar is distinguished by its use of learning to compile planning activity automatically into rules, which in turn control the selection of operators during interactions with the world. Soar's operators can be simple, primitive actions, or they can be hierarchically decomposed into complex activities. Thus, Soar provides for fast but flexible plan execution.

Following our presentation of plan execution, we step back and explicitly consider the properties of environments and agents that were most influential in Soar's development. From these properties, we derive a set of required general agent capabilities, such as the ability to encode large bodies of knowledge, use planning, correct knowledge, etc. For each of these capabilities we analyze how the architectural features of Soar support it. This analysis can form the basis for comparing different architectures, although in this paper we restrict ourselves to an analysis of Soar (but see Wray et al. (1995) for one such comparison).

Of the capabilities related to plan execution, one stands out as being at the nexus of both the environment/agent properties and architecture design. This is the capability to integrate knowledge dynamically during performance of a task. We assume that central to any agent is the need to select and execute its next action. To be intelligent, general, and flexible, an agent must use large bodies of knowledge from diverse sources to make decisions and carry out actions. The major sources of knowledge include an agent's current sensing of the world, preprogrammed knowledge and goals, the agent's prior experience, instructions from other agents, and the results of recent planning activities (plans). However, many plan execution systems base their decisions solely on their plans, thus limiting their flexibility. One reason is that it is difficult to integrate plan control knowledge dynamically and incrementally with all of the other existing knowledge as the agent is behaving in the world. What we hope to show is that this is an important capability for responding to realistic environments, and that Soar's architectural components provide this capability in a general and flexible way.

Soar has been used for a variety of real and simulated domains, including real and simulated mobile robot control (Laird and Rosenbloom 1990), real and simulated robotic arm control (Laird et al. 1991), simulated stick-level aircraft control (Pearson et al. 1993), simulated tactical aircraft combat (Tambe et al. 1995), and a variety of other simulated domains (Covrigaru 1992; Yager 1992). We will use examples from these domains to illustrate our points throughout the paper.
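To make the idea of dynamic knowledge integration concrete, the following is a minimal, hypothetical sketch (it is not Soar's actual implementation, and all names such as State, sensing_rules, and decide are illustrative assumptions). It shows a decision cycle in which several knowledge sources named above, current sensing, preprogrammed goal knowledge, and rules compiled from earlier planning, each contribute preferences for candidate operators, and the operator with the strongest combined support is selected on every cycle.

```python
# Hypothetical sketch of a decision cycle that integrates preferences from
# multiple knowledge sources on every cycle; not Soar's actual implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class State:
    """Working memory: current sensing plus goal information."""
    percepts: dict = field(default_factory=dict)
    goal: str = ""

# A "rule" maps the current state to (operator, preference) pairs.
Rule = Callable[[State], list[tuple[str, float]]]

def sensing_rules(state: State) -> list[tuple[str, float]]:
    # Reactive knowledge: avoid an obstacle the moment it is sensed.
    if state.percepts.get("obstacle_ahead"):
        return [("turn-away", 2.0), ("move-forward", -2.0)]
    return []

def goal_rules(state: State) -> list[tuple[str, float]]:
    # Preprogrammed goal knowledge: prefer moving toward the target.
    if state.goal == "reach-target":
        return [("move-forward", 1.0)]
    return []

def compiled_plan_rules(state: State) -> list[tuple[str, float]]:
    # Knowledge compiled from earlier planning: in this situation the plan
    # said to pick up the block before moving on.
    if state.percepts.get("at_block") and not state.percepts.get("holding"):
        return [("pick-up-block", 3.0)]
    return []

def decide(state: State, rules: list[Rule]) -> str:
    """One decision cycle: fire all rules, combine preferences, pick the best."""
    support: dict[str, float] = {}
    for rule in rules:
        for operator, pref in rule(state):
            support[operator] = support.get(operator, 0.0) + pref
    return max(support, key=support.get) if support else "wait"

if __name__ == "__main__":
    rules = [sensing_rules, goal_rules, compiled_plan_rules]
    state = State(percepts={"obstacle_ahead": True}, goal="reach-target")
    print(decide(state, rules))  # sensing outweighs the goal here: "turn-away"
```

Because every knowledge source is consulted on every cycle, new control knowledge, whether learned from planning, programmed in advance, or received as instruction, can influence the very next decision; a system that executed only a stored plan would ignore the obstacle in this example until the plan failed.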