Deliberately prejudiced self-driving vehicles elicit the most outrage

Should self-driving vehicles be prejudiced, e.g., deliberately harm the elderly rather than young children? When people make such forced choices on the vehicle's behalf, they exhibit systematic preferences (e.g., favoring young children), yet when their options are unconstrained they favor egalitarianism. So which of these response patterns should guide autonomous vehicle (AV) programming and policy? We argue that this debate overlooks the public reaction most likely to threaten the industry's life-saving potential: moral outrage. We find that people are more outraged by AVs that kill discriminately than indiscriminately. Crucially, they are even more outraged by an AV that deliberately kills a less preferred group (e.g., an elderly person rather than a child) than by one that indiscriminately kills a more preferred group (e.g., a child). Thus, at least insofar as the public is concerned, there may be more reason to depict and program AVs as egalitarian.
