'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

Data-driven decision-making that is consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals with limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles matter to justice perceptions primarily when subjects are exposed to multiple different styles; under repeated exposure to a single style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.