Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach

Businesses increasingly rely on algorithms: data-trained sets of decision rules (the output of processes often called "machine learning") that implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a "right to explanation." It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction. Such a view is insufficient, especially when data are used in secondary, noncontextual, and unpredictable ways, as is the inescapable nature of advanced artificial intelligence systems. We submit that an alternative view of informed consent, as an assurance of trust for incomplete transactions, shows why the rationale of informed consent already entails a right to ex post explanation.