The effects of competition and regulation on error inequality in data-driven markets

Recent work has documented instances of unfairness in deployed machine learning models, and significant research effort has been dedicated to creating algorithms that intrinsically consider fairness. In this work, we highlight another source of unfairness: market forces that drive differential investment in the data pipelines for different groups. We develop a high-level model to study this question. First, we show that our model predicts unfairness in a monopoly setting. Then, we show that under all but the most extreme models, competition does not eliminate this tendency and may even exacerbate it. Finally, we consider two avenues for regulating a machine-learning-driven monopolist (relative error-inequality bounds and absolute error bounds) and quantify the price of fairness, as well as who pays it. These models imply that mitigating fairness concerns may require policy-driven solutions, not only technological ones.
