Adjustable Autonomy: From Theory to Implementation

Recent exciting, ambitious applications in agent technology involve agents acting individually or in teams in support of critical activities of individual humans or entire human organizations. Applications range from intelligent homes [13], to "routine" organizational coordination [16], to electronic commerce [4], to long-term space missions [12, 6]. These new applications have brought forth an increasing interest in agents' adjustable autonomy (AA), i.e., in agents' dynamically adjusting their own level of autonomy based on the situation [8]. In fact, many of these applications will not be deployed unless reliable AA reasoning is a central component. At the heart of AA is the question of whether and when agents should make autonomous decisions and when they should transfer decision-making control to other entities (e.g., human users). Unfortunately, previous work in adjustable autonomy has focused on individual agent-human interactions, and the techniques developed fail to scale up to complex heterogeneous organizations. Indeed, as a first step, we focused on a small-scale but real-world agent-human organization called Electric Elves, where an individual agent and human worked together within a larger multiagent context. Although the application limits the interactions among entities, key weaknesses of previous approaches to adjustable autonomy are readily apparent. In particular, previous approaches to transfer of control are seen to be too rigid, employing one-shot transfers of control that can result in unacceptable coordination failures. Furthermore, previous approaches ignore potential costs (e.g., from delays) to an agent's team due to such transfers of control. To remedy such problems, we propose a novel approach to AA, based on the notion of a transfer-of-control strategy.
A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from the agent to the user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We operationalize such strategies via Markov decision processes (MDPs), which select the optimal strategy given an uncertain environment and costs to individuals and teams. We have developed a general reward function and state representation for such an MDP, to facilitate application of the approach to different domains. We present results from a careful evaluation of this approach, including via its use in our real-world, deployed Electric Elves system.
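To make the idea concrete, the sketch below models a toy version of such an MDP. It is an illustrative assumption-laden example, not the paper's actual reward function or state representation: states are (who holds control, time step), and the three actions mirror the strategy components described above: decide now, transfer control, or delay a coordination deadline at a team cost. All numeric parameters are hypothetical.

```python
from functools import lru_cache

# Hypothetical parameters -- illustrative assumptions, not the paper's model.
HORIZON = 4                        # decision deadline, in discrete time steps
QUALITY = {'A': 0.6, 'H': 0.9}     # expected decision quality: agent vs. human user
P_RESPOND = 0.5                    # chance the user responds within one step
TRANSFER_COST = 0.05               # overhead of a transfer-of-control action
DELAY_COST = 0.2                   # team cost of relaxing a coordination constraint
MISS_COST = 1.0                    # miscoordination cost if the deadline passes

@lru_cache(maxsize=None)
def value(ctrl, t):
    """Optimal expected value of state (controller, time)."""
    if t == HORIZON:
        return -MISS_COST          # deadline reached with no decision made
    return max(q for _, q in q_values(ctrl, t))

def q_values(ctrl, t):
    """Expected value of each action available in state (controller, time)."""
    if ctrl == 'A':
        decide = QUALITY['A']      # the agent decides immediately when asked
    else:                          # the user may fail to respond this step
        decide = (P_RESPOND * QUALITY['H']
                  + (1 - P_RESPOND) * value(ctrl, t + 1))
    other = 'H' if ctrl == 'A' else 'A'
    transfer = -TRANSFER_COST + value(other, t + 1)   # hand over control
    delay = -DELAY_COST + value(ctrl, t + 1)          # buy time at a team cost
    return [('decide', decide), ('transfer', transfer), ('delay', delay)]

def best_action(ctrl, t):
    """Read the optimal policy off the Q-values for one state."""
    return max(q_values(ctrl, t), key=lambda aq: aq[1])[0]
```

With these assumed numbers, the optimal policy starting from agent control at t=0 is to transfer to the user, wait one step for a response, transfer back if none arrives, and decide autonomously just before the deadline: a multi-step conditional strategy rather than the rigid one-shot transfer the abstract criticizes.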

References

[1] J. Ross Quinlan et al. C4.5: Programs for Machine Learning, 1992.

[2] Thomas B. Sheridan et al. Telerobotics, Automation, and Human Supervisory Control, 2003.

[3] David Kortenkamp et al. Adjustable Autonomy for Human-Centered Autonomous Systems on Mars, 1998.

[4] Jean Oh et al. Electric Elves: Applying Agent Technology to Support Human Organizations, IAAI, 2001.

[5] Stuart J. Russell et al. Principles of Metareasoning, Artificial Intelligence, 1989.

[6] K. Suzanne Barber et al. Dynamic adaptive autonomy in multi-agent systems, Journal of Experimental & Theoretical Artificial Intelligence, 2000.

[7] Shlomo Zilberstein et al. Using Anytime Algorithms in Intelligent Systems, AI Magazine, 1996.

[8] Maria L. Gini et al. Mixed-initiative decision support in agent-based automated contracting, AGENTS '00, 2000.

[9] L. Comfort. Shared Risk: Complex Systems in Seismic Response, 1999.

[10] Tom M. Mitchell et al. Experience with a learning personal assistant, Communications of the ACM, 1994.

[11] Abhimanyu Das et al. Adaptive Agent Integration Architectures for Heterogeneous Team Members, ICMAS, 2000.

[12] D. Kortenkamp et al. Adjustable control autonomy for manned space flight, 2000 IEEE Aerospace Conference Proceedings, 2000.

[13] Victor R. Lesser et al. The UMASS intelligent home project, AGENTS '99, 1999.

[14] Henry Hexmoor. Case Studies of Autonomy, FLAIRS Conference, 2000.

[15] Krithi Ramamritham et al. Evaluation of a flexible task scheduling algorithm for distributed hard real-time systems, IEEE Transactions on Computers, 1985.

[16] Erik Hollnagel et al. Human–machine function allocation: a functional modelling approach, 1999.

[17] Jean Oh et al. Electric Elves: Immersing an Agent Organization in a Human Organization, 2000.

[18] Eric Horvitz et al. Attention-Sensitive Alerting, UAI, 1999.

[19] Martin L. Puterman et al. Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994.

[20] Milind Tambe. Towards Flexible Teamwork, Journal of Artificial Intelligence Research, 1997.