Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)

Utility functions or their equivalents (value functions, objective functions, loss functions, reward functions, preference orderings) are a central tool in most current machine learning systems. These mechanisms for defining goals and guiding optimization run into practical and conceptual difficulty when there are independent, multi-dimensional objectives that need to be pursued simultaneously and cannot be reduced to each other. Ethicists have proved several impossibility theorems that stem from this difficulty; those results appear to show that there is no way of formally specifying what it means for an outcome to be good for a population without violating strong human ethical intuitions (in such cases, the objective function is a social welfare function). We argue that this is a practical problem for any machine learning system (such as medical decision support systems or autonomous weapons) or rigidly rule-based bureaucracy that will make high-stakes decisions about human lives: such systems should not use objective functions in the strict mathematical sense. We explore the alternative of using uncertain objectives, represented, for instance, as partially ordered preferences or as probability distributions over total orders. We show that previously known impossibility theorems can be transformed into uncertainty theorems in both of those settings, and prove lower bounds on how much uncertainty is implied by the impossibility results. We close by proposing two conjectures about the relationship between uncertainty in objectives and severe unintended consequences from AI systems.
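
As a concrete illustration of the representations mentioned above, the sketch below encodes an uncertain objective as a probability distribution over total orders of outcomes, and derives the partial order that holds only where all orders in the support agree. It is a minimal sketch, not code from the paper: the outcome names, the credences, and the helpers `prefers`, `credence_a_over_b`, `partial_order_prefers`, and `undominated` are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): an uncertain objective as a
# probability distribution over total orders of outcomes. The induced partial
# order ranks A above B only when every order in the support agrees; where the
# orders disagree, the comparison is left undecided and a cautious system could
# defer rather than optimize.

OUTCOMES = ["swerve_left", "swerve_right", "brake_only"]

# Hypothetical credences over total orders (each tuple is a best-to-worst ranking).
ORDER_DISTRIBUTION = {
    ("brake_only", "swerve_left", "swerve_right"): 0.5,
    ("brake_only", "swerve_right", "swerve_left"): 0.3,
    ("swerve_left", "brake_only", "swerve_right"): 0.2,
}

def prefers(order, a, b):
    """True if outcome a is ranked above outcome b in this total order."""
    return order.index(a) < order.index(b)

def credence_a_over_b(a, b):
    """Total probability mass of orders that rank a above b."""
    return sum(p for order, p in ORDER_DISTRIBUTION.items() if prefers(order, a, b))

def partial_order_prefers(a, b):
    """a dominates b in the induced partial order iff every order in the support agrees."""
    return all(prefers(order, a, b) for order in ORDER_DISTRIBUTION)

def undominated(outcomes):
    """Outcomes not dominated by any alternative: the set a cautious agent may pick among."""
    return [a for a in outcomes
            if not any(partial_order_prefers(b, a) for b in outcomes if b != a)]

if __name__ == "__main__":
    print(credence_a_over_b("brake_only", "swerve_left"))       # 0.8
    print(partial_order_prefers("brake_only", "swerve_right"))  # True: all orders agree
    print(partial_order_prefers("brake_only", "swerve_left"))   # False: orders disagree
    print(undominated(OUTCOMES))  # ['swerve_left', 'brake_only']
```

In this toy setup the distribution over total orders supplies graded credences (e.g. 0.8 that braking beats swerving left), while the unanimity rule recovers a partial order that leaves some pairs incomparable, which is one way a system can avoid committing to a single, fully specified objective function.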
