Risk of Injury in Moral Dilemmas With Autonomous Vehicles

As autonomous machines such as automated vehicles (AVs) and robots become pervasive in society, they will inevitably face moral dilemmas in which they must make decisions that risk injuring humans. Prior research, however, has framed these dilemmas in starkly simple terms, casting decisions as matters of life and death and neglecting how the risk of injury to the involved parties shapes the outcome. Here we address this gap with experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs either to save five pedestrians, which we refer to as the utilitarian choice, or to save the driver, the nonutilitarian choice. The results indicate that most participants made the utilitarian choice, but that this choice was moderated in important ways by the perceived risk to the driver and the perceived risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others' behavior. In the fourth experiment, participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, contrary to the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for designing autonomous machines that solve these dilemmas while remaining likely to be adopted in practice.
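The game-theoretic prediction mentioned above can be made concrete with a minimal sketch. The payoff values below are illustrative assumptions, not figures from the paper: the AV dilemma is modeled as a symmetric two-player game in which each driver programs an AV either to spare the pedestrians (utilitarian, "U") or to protect the driver (nonutilitarian, "N"). With prisoner's-dilemma-style payoffs, N strictly dominates U, so the unique pure-strategy Nash equilibrium is mutual nonutilitarianism:

```python
from itertools import product

STRATEGIES = ("U", "N")

# PAYOFF[(mine, other)] -> my payoff. Hypothetical numbers chosen so that
# mutual utilitarianism beats mutual self-protection, yet each player still
# gains by unilaterally switching to N -- the structure of a social dilemma.
PAYOFF = {
    ("U", "U"): 3, ("U", "N"): 0,
    ("N", "U"): 4, ("N", "N"): 1,
}

def best_response(other: str) -> str:
    """Strategy that maximizes my payoff given the other player's choice."""
    return max(STRATEGIES, key=lambda mine: PAYOFF[(mine, other)])

def nash_equilibria():
    """All pure-strategy profiles in which both players best-respond."""
    return [
        (a, b)
        for a, b in product(STRATEGIES, STRATEGIES)
        if a == best_response(b) and b == best_response(a)
    ]

print(nash_equilibria())  # the lone equilibrium is ("N", "N")
```

Under these assumed payoffs the model predicts convergence to nonutilitarianism; the experimental finding of persistent utilitarian choices is what makes the contrast with this baseline interesting.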
