In their everyday lives, people flexibly handle many tasks. They work on one at a time, make quick decisions about which one to work on, and put aside tasks whenever attending to them is not required to achieve the task's goal. This last capability is critical because, rather than fixate on a blocked task, a person can work on some other task. For example, a person making bean soup wouldn't watch the beans as they soak overnight. Instead, she would take the inability to affect the progress of making bean soup as an opportunity to work on some other, possibly less important task where progress is possible. When taking advantage of these opportunities, people don't completely forget what they were doing. Instead, the put-aside tasks guide the selection of the new tasks. After the beans have soaked and the soup is simmering, a person might go out to get other chores done, but she probably would not forget the soup and eat out. By putting aside a blocked task and remembering it at the appropriate time, a person can complete many other tasks and still accomplish the blocked one in a timely manner.

This paper describes how an artificial agent, called Laureli, can maintain goal-directed behavior while suspending her blocked tasks, similarly to the behavior described above. Laureli serves as an exemplar agent, grounding in detailed and understandable examples the problems that occur when she suspends her tasks for later reactivation. In creating Laureli, this paper extends work [Hammond, 1989; Pryor, 1994] done to recognize when a task is blocked so that it can be put aside until progress is possible again.
Laureli's suspension-reactivation mechanisms provide for interleaving more available tasks* during a task's slack time (while the task is blocked). A task's availability is defined as a metric of how likely the agent expects that working on the task will make progress toward the task's goal. A task's availability changes over time, and depends on both the agent's actions and input from the environment. Laureli's suspension-reactivation mechanisms are her method for representing large changes in a task's availability over time.

Representing a task's availability to the agent is important, because the agent can then better schedule its tasks as it executes them. If the task's availability over time is known in advance, then the agent can use that knowledge to generate a schedule that can simply be followed at execution time. However, in many cases the agent either doesn't know, or finds it difficult to know, how the task's availability will change over time. The second step in making bean soup, "boiling the beans for an hour", is such an example. Perhaps Laureli could measure the water, look up the specific heat of the beans, and then, using the effective heat transfer from the stove, calculate how long until boiling. However, she could also just put the pan of beans and water on the stove, stay in the area, and occasionally check to see if the water was boiling.

*This research was supported in part by a grant from Martin Marietta and in part by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and ARPA under grant F33615-93-1-1330. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of Martin Marietta or the United States Government.
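The paper describes these mechanisms in prose. As a minimal sketch of the idea only (all names, the step counter, and the availability tests here are hypothetical illustrations, not Laureli's actual implementation), an agent can re-poll each task's availability every step, leave blocked tasks suspended but remembered, and work on an available task during the blocked task's slack time:

```python
# Hypothetical sketch (not from the paper) of availability-driven interleaving:
# each step the agent re-polls every task's availability, keeps blocked tasks
# suspended, and works on the first available task.

class Task:
    """Pairs an availability test (can progress be made now?) with a
    work function that returns True once the task's goal is achieved."""
    def __init__(self, name, available, work):
        self.name = name
        self.available = available  # polls the (simulated) environment
        self.work = work            # performs one unit of work

def run_agent(tasks, log, max_steps=50):
    tasks = list(tasks)
    for step in range(max_steps):
        if not tasks:
            return
        # Suspension is implicit: blocked tasks are excluded from the choice
        # but remembered, so a later rise in availability reactivates them.
        runnable = [t for t in tasks if t.available(step)]
        if not runnable:
            continue  # slack time with nothing runnable: idle this step
        task = runnable[0]
        if task.work():  # True means the task's goal was just achieved
            log.append(task.name)
            tasks.remove(task)

# Example: "soup" is blocked (beans soaking) until step 5, while "errands"
# is always available and needs three units of work.
errand_work = iter([False, False, True])
soup = Task("soup", available=lambda step: step >= 5, work=lambda: True)
errands = Task("errands", available=lambda step: True,
               work=lambda: next(errand_work))
log = []
run_agent([soup, errands], log)
print(log)  # → ['errands', 'soup']: errands finish during the soup's slack time
```

Note that the agent never computes in advance when the soup task will become available; it simply monitors, which is the second approach described next.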
This paper advocates this second approach: when the agent doesn't have all the availability knowledge, it can monitor a task's execution rather than attempt to schedule the task.

Laureli's suspending and reactivating of her tasks is similar to deliberately excluding the suspended tasks from her decisions about what action to take. However, some decisions based on the smaller number of tasks will be functionally different from similar decisions that consider all the tasks. Since these decisions affect the agent's external behavior, suspending tasks can affect the agent's apparent rationality in achieving its goals. Birnbaum [Birnbaum, 1986] describes many of the issues in maintaining an agent's rationality when suspending tasks. Using some assumptions that differ from Birnbaum's, Laureli can often act similarly to an agent with access to all its tasks, because she can access all the
[1] John E. Laird et al. Integrating Execution, Planning, and Learning in Soar for External Environments. AAAI, 1990.
[2] Tom Michael Mitchell et al. Explanation-based generalization: A unifying view. 1986.
[3] Louise Pryor. Opportunities and planning in an unpredictable world. 1994.
[4] Stuart J. Russell. Rationality and Intelligence. IJCAI, 1995.
[5] Robert B. Doorenbos. Combining Left and Right Unlinking for Matching a Large Number of Learned Rules. AAAI, 1994.
[6] Daniel R. Kuokka et al. The deliberative integration of planning, execution, and learning. 1990.
[7] Kristian J. Hammond et al. Opportunistic memory. IJCAI, 1989.
[8] R. Wilensky. Planning and Understanding: A Computational Approach to Human Reasoning. 1983.
[9] John R. Anderson et al. Rules of the Mind. 1993.
[10] Lawrence Birnbaum et al. Integrated processing in planning and understanding. 1986.
[11] Robert James Firby et al. Adaptive execution in complex dynamic worlds. 1989.
[12] David J. Israel et al. Plans and resource-bounded practical reasoning. Computational Intelligence, 1988.