Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach

Businesses increasingly rely on algorithms, that is, data-trained sets of decision rules (the output of processes commonly called “machine learning”), to implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view stems from a rigid understanding of informed consent as a static and complete transaction. Such a view is insufficient, especially when data are used in secondary, noncontextual, and unpredictable ways, which is the inescapable nature of advanced artificial intelligence systems. We submit that an alternative view of informed consent, as an assurance of trust for incomplete transactions, shows why the rationale of informed consent already entails a right to ex post explanation.