Beyond Asimov: The Three Laws of Responsible Robotics

that the robot had complex perception and reasoning skills equivalent to a child and that robots were subservient to humans. Although the laws were simple and few, the stories attempted to demonstrate just how difficult they were to apply in various real-world situations. Although the robots usually behaved "logically," they often failed to do the "right" thing, typically because the particular context of application required subtle adjustments of judgment on the part of the robot (for example, determining which law took priority in a given situation, or what constituted helpful or harmful behavior).

The three laws have been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society's expectations about how robots should act around humans. For instance, the media frequently refer to human–robot interaction in terms of the three laws. They have been the subject of serious blogs, events, and even scientific publications. The Singularity Institute organized an event and Web site, "Three Laws Unsafe," to try to counter public expectations of robots in the wake of the movie I, Robot. Both the philosophy1 and AI2 communities have discussed ethical considerations of robots in society using the three laws as a reference, with a recent discussion in IEEE Intelligent Systems.3 Even medical doctors have considered robotic surgery in the context of the three laws.4

With few notable exceptions,5,6 there has been relatively little discussion of whether robots, now or in the near future, will have sufficient perceptual and reasoning capabilities to actually follow the laws. And there appears to be even less serious discussion of whether the laws are actually viable as a framework for human–robot interaction, outside of cultural expectations.
Following the definitions in Moral Machines: Teaching Robots Right from Wrong,7 Asimov's laws are based on functional morality, which assumes that robots have sufficient agency and cognition to make moral decisions. Unlike many of his successors, Asimov is less concerned with the details of robot design than with exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy. He uses the situations as a foil to explore issues such as