Adjustable autonomy in real-world multi-agent environments

Through {\em adjustable autonomy} (AA), an agent can dynamically vary the degree to which it acts autonomously, allowing it to exploit human abilities to improve its performance without becoming overly dependent on, or intrusive toward, its human users. AA research is critical for the successful deployment of multi-agent systems in support of important human activities. While most previous AA work has focused on individual agent-human interactions, this paper focuses on {\em teams} of agents operating in real-world human organizations. The need for agent teamwork and coordination in such environments introduces novel AA challenges. First, agents must be more judicious in asking for human intervention: although human input can prevent erroneous actions that carry high team costs, one agent's inaction while waiting for a human response can cause miscoordination with the other agents in the team. Second, even when individual agents make appropriate local decisions, the team as a whole can make global decisions that are unacceptable to the human team. Third, the diversity in real-world human organizations requires that agents gradually learn individualized models of the human members, while still making reasonable decisions before sufficient data are available. We address these challenges with a multi-agent AA framework based on an adaptive model of users (and teams) that reasons about the uncertainty, costs, and constraints of decisions at {\em all} levels of the team hierarchy, from the individual users to the overall human organization. We implement this framework with Markov decision processes, which are well suited to reasoning about the costs and uncertainty of individual and team actions. Our approach to AA has proven essential to the success of our deployed multi-agent Electric Elves system, which assists our research group in rescheduling meetings, choosing presenters, tracking people's locations, and ordering meals.
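
The abstract states that the framework is implemented with Markov decision processes that weigh the costs and uncertainty of acting autonomously against those of consulting a human. As a purely illustrative aid, the sketch below solves a tiny transfer-of-control MDP by value iteration; the states, actions, response probability, and interruption and miscoordination penalties are invented for this example and are not the model or parameters used in the paper.

\begin{verbatim}
# Illustrative transfer-of-control MDP, solved by value iteration.
# All states, actions, probabilities, and costs below are assumptions
# made for this sketch; they are not taken from the paper.

STATES = ["waiting", "user_responded", "acted_autonomously", "done"]
ACTIONS = ["ask_user", "act_autonomously", "wait"]

GAMMA = 0.95        # discount factor (assumed)
P_RESPONSE = 0.3    # assumed probability the user replies within one step


def transition(state, action):
    """Return a list of (probability, next_state, reward) triples."""
    if state != "waiting":
        return [(1.0, "done", 0.0)]          # terminal states absorb
    if action == "ask_user":
        # Asking incurs an interruption cost; while waiting for a reply
        # the rest of the agent team may be held up (miscoordination).
        return [(P_RESPONSE, "user_responded", 10.0),
                (1.0 - P_RESPONSE, "waiting", -3.0)]
    if action == "act_autonomously":
        # Acting now keeps the team coordinated but risks an erroneous,
        # high-cost team-level decision.
        return [(0.8, "acted_autonomously", 5.0),
                (0.2, "acted_autonomously", -20.0)]
    # "wait": do nothing; the team accrues a per-step miscoordination cost.
    return [(1.0, "waiting", -2.0)]


def q_value(values, state, action):
    return sum(p * (r + GAMMA * values[s2])
               for p, s2, r in transition(state, action))


def value_iteration(epsilon=1e-6):
    values = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(q_value(values, s, a) for a in ACTIONS)
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < epsilon:
            break
    policy = {s: max(ACTIONS, key=lambda a: q_value(values, s, a))
              for s in STATES}
    return values, policy


if __name__ == "__main__":
    values, policy = value_iteration()
    print("Values:", values)
    print("Policy in 'waiting':", policy["waiting"])
\end{verbatim}

Consistent with the abstract's emphasis on individualized user models, the cost and probability parameters in such a model would in practice be adapted per user as data about that person accumulates, rather than fixed as they are in this sketch.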
