Moral judgments of human vs. robot agents

Robots will eventually perform norm-regulated roles in society (e.g., caregiving), but how will people apply moral norms and judgments to robots? By answering such questions, researchers can inform engineering decisions while also probing the scope of moral cognition. In previous work, we compared people's moral judgments of human and robot agents' behavior in moral dilemmas. We found that robots, compared with humans, were more commonly expected to sacrifice one person for the good of many, and they were blamed more than humans when they refrained from that decision. Thus, people seem to hold somewhat different normative expectations of robots than of humans. In the current project, we analyzed in detail the justifications people provided for three types of moral judgments (permissibility, wrongness, and blame) of robot and human agents. We found that people's moral judgments of both agents relied on the same conceptual and justificatory foundation: consequences and prohibitions undergirded wrongness judgments, whereas attributions of mental agency undergirded blame judgments. For researchers, this means that people extend moral cognition to nonhuman agents. For designers, this means that robots with credible cognitive capacities will be treated as moral agents, though they may be held to different moral norms.
