Algorithmic and Economic Perspectives on Fairness

Algorithmic systems have been used to inform consequential decisions for at least a century. Recidivism prediction dates back to the 1920s, and automated credit scoring began in the middle of the last century, but the last decade has witnessed an acceleration in the adoption of prediction algorithms. They are deployed to screen job applicants; to recommend products, people, and content; and in medicine (diagnostics and decision aids), criminal justice, facial recognition, lending and insurance, and the allocation of public services. The prominence of algorithmic methods has led to concerns that they systematically mistreat those whose behavior they predict. These concerns have found their way into the popular imagination through news accounts and general-interest books. Even when these algorithms are deployed in regulated domains, existing regulation appears poorly equipped to address the issue. The word 'fairness' in this context is a placeholder for three related equity concerns. First, such algorithms may systematically discriminate against individuals who share an ethnicity, religion, or gender, irrespective of whether the relevant group enjoys legal protections. Second, these algorithms may fail to treat people as individuals. Third, there is the question of who gets to decide how algorithms are designed and deployed. These concerns are present even when humans, unaided, make the predictions.
