Fair, Transparent, and Accountable Algorithmic Decision-making Processes

The combination of the increased availability of large amounts of fine-grained human behavioral data and advances in machine learning has led to a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective, and thus potentially fairer, decisions than those made by humans, who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we provide an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. We also highlight the critical and urgent need to engage multi-disciplinary teams of researchers, practitioners, policy-makers, and citizens to co-develop, deploy, and evaluate, in the real world, algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
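As a minimal, illustrative sketch of one class of technical solutions surveyed here, the snippet below checks two common group-fairness criteria, the demographic parity gap and the disparate impact ratio, on a set of binary decisions. The data, function names, and the 80% threshold are assumptions introduced purely for illustration; they are not taken from the OPAL project or from any specific method discussed in the paper.

```python
# Illustrative sketch (not from the paper): two simple group-fairness checks
# on binary decisions, split by a protected attribute.

def positive_rate(decisions, groups, group):
    """Fraction of favourable (1) decisions received by members of `group`."""
    vals = [d for d, g in zip(decisions, groups) if g == group]
    return sum(vals) / len(vals)

def demographic_parity_gap(decisions, groups):
    """Absolute difference in favourable-decision rates between groups 0 and 1."""
    return abs(positive_rate(decisions, groups, 0) - positive_rate(decisions, groups, 1))

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lower to the higher favourable-decision rate; values below
    0.8 are often flagged under the informal '80% rule'."""
    r0 = positive_rate(decisions, groups, 0)
    r1 = positive_rate(decisions, groups, 1)
    lo, hi = sorted((r0, r1))
    return lo / hi if hi > 0 else 1.0

# Hypothetical decisions (1 = favourable) and protected-group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

print(demographic_parity_gap(decisions, groups))   # ~0.2
print(disparate_impact_ratio(decisions, groups))   # ~0.67, below the 0.8 rule of thumb
```

Auditing decisions with metrics of this kind is only one ingredient of the broader fairness, accountability, and transparency toolbox discussed in the paper.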
