Asimovian Multiagents: Applying Laws of Robotics to Teams of Humans and Agents

In the March 1942 issue of "Astounding Science Fiction", Isaac Asimov first enumerated his Three Laws of Robotics. Decades later, researchers in agents and multiagent systems have begun to examine these laws as a means of providing useful guarantees for deployed agent systems. Motivated by unexpected failures and behavior degradations in complex mixed agent-human teams, this paper is the first to focus on applying Asimov's first two laws to provide behavioral guarantees in such teams. Operationalizing these laws in this setting, however, raises three novel issues. First, while the laws were originally written for the interaction of an individual robot with an individual human, our systems must clearly operate in a team context. Second, key notions in these laws (e.g., causing "harm" to humans) are stated in very abstract terms and must be made concrete in implemented systems. Third, unlike in science fiction, agents and humans may not have perfect information about the world, yet they must act in accordance with these laws despite this uncertainty. Addressing this uncertainty is a key thrust of this paper, and we show that agents must detect and overcome such states of uncertainty while ensuring adherence to Asimov's laws. We present results from two different domains, each of which takes a different approach to operationalizing Asimov's laws.
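The abstract does not specify how adherence under uncertainty is enforced; as a rough illustration of the third issue only, one simple way to encode "respect the first two laws despite imperfect information" is to filter candidate actions by an estimated probability of causing human harm, and only then consider human orders and utility. The sketch below is a minimal, hypothetical Python example under that assumption; the Action class, choose_action function, harm model, and harm_threshold value are illustrative names, not the paper's method.

```python
# Minimal sketch (not from the paper): veto actions whose estimated probability
# of harming a human exceeds a threshold (first law), then prefer actions that
# carry out human orders (second law), breaking ties by expected utility.
# The harm model, threshold, and action representation are assumptions.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Action:
    name: str
    ordered_by_human: bool   # does this action carry out a human order?
    expected_utility: float


def choose_action(
    candidates: List[Action],
    harm_probability: Callable[[Action], float],
    harm_threshold: float = 0.05,
) -> Optional[Action]:
    """Pick an action that respects a first-law harm bound under uncertainty,
    preferring second-law compliance (human orders) among the safe options."""
    safe = [a for a in candidates if harm_probability(a) <= harm_threshold]
    if not safe:
        return None  # refuse to act rather than risk harm
    ordered = [a for a in safe if a.ordered_by_human]
    pool = ordered if ordered else safe
    return max(pool, key=lambda a: a.expected_utility)


if __name__ == "__main__":
    actions = [
        Action("proceed", ordered_by_human=True, expected_utility=1.0),
        Action("wait", ordered_by_human=False, expected_utility=0.2),
    ]
    # Toy harm model: 'proceed' is risky, 'wait' is safe.
    harm = lambda a: 0.3 if a.name == "proceed" else 0.0
    print(choose_action(actions, harm))  # selects the 'wait' action
```

In this toy rule the harm bound dominates the human order, mirroring the precedence of the first law over the second; how "harm" and its probability are actually estimated is exactly the kind of domain-specific concretization the paper discusses.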
