There is no trade-off: enforcing fairness can improve accuracy

One of the main barriers to the broader adoption of algorithmic fairness in machine learning is the perceived trade-off between the fairness and the performance of ML models: many practitioners are unwilling to sacrifice their model's performance for fairness. In this paper, we show that this trade-off may not be necessary. If the algorithmic biases in an ML model are due to sampling biases in the training data, then enforcing algorithmic fairness may improve the model's performance on unbiased test data. We study conditions under which enforcing algorithmic fairness helps practitioners learn the Bayes decision rule for the (unbiased) test data from biased training data, and we demonstrate the practical implications of our theoretical results on real-world ML tasks.
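To make the claim concrete, here is a minimal, self-contained sketch (in Python with NumPy) of the kind of experiment the abstract describes. It is an illustration under assumed conditions, not the paper's algorithm: the synthetic distribution, the group-dependent subsampling in `biased_subsample`, and the demographic-parity penalty in `fit_logreg` are all illustrative choices, with the penalty standing in for whatever fairness constraint a practitioner might enforce.

```python
# Hedged sketch: both groups share one Bayes rule, but the training
# sample under-represents positives from one group. A fairness penalty
# can push the learned rule back toward the shared Bayes rule, which is
# measured on UNBIASED test data. All names/parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_unbiased(n):
    """Both groups share the same Bayes rule: predict y=1 iff x > 0.5."""
    a = rng.integers(0, 2, n)            # group membership
    y = rng.integers(0, 2, n)            # balanced labels
    x = rng.normal(loc=y, scale=1.0)     # feature depends on y only
    return np.column_stack([x, a]), y

def biased_subsample(X, y, keep_prob=0.2):
    """Sampling bias: drop most positive examples from group a=1."""
    a = X[:, 1]
    drop = (a == 1) & (y == 1) & (rng.random(len(y)) > keep_prob)
    return X[~drop], y[~drop]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, fair_weight=0.0, lr=0.1, steps=3000):
    """Logistic regression by gradient descent, with an optional
    demographic-parity penalty: fair_weight * (gap in mean predicted
    score between the two groups)^2 added to the log-loss."""
    Xb = np.column_stack([X, np.ones(len(y))])   # add intercept column
    w = np.zeros(Xb.shape[1])
    g0, g1 = X[:, 1] == 0, X[:, 1] == 1
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        grad = Xb.T @ (p - y) / len(y)           # mean log-loss gradient
        if fair_weight > 0:
            gap = p[g1].mean() - p[g0].mean()
            dp = p * (1 - p)                     # sigmoid derivative
            dgap = (Xb[g1] * dp[g1, None]).mean(0) - (Xb[g0] * dp[g0, None]).mean(0)
            grad += fair_weight * 2 * gap * dgap # chain rule on gap^2
        w -= lr * grad
    return w

X_tr, y_tr = sample_unbiased(20000)
X_tr, y_tr = biased_subsample(X_tr, y_tr)        # biased training data
X_te, y_te = sample_unbiased(20000)              # unbiased test data

for fw, name in [(0.0, "unconstrained"), (5.0, "fairness-penalized")]:
    w = fit_logreg(X_tr, y_tr, fair_weight=fw)
    pred = sigmoid(np.column_stack([X_te, np.ones(len(y_te))]) @ w) > 0.5
    print(f"{name:20s} test accuracy: {(pred == y_te).mean():.3f}")
```

Under this setup, the fairness penalty counteracts the group-dependent shift that the biased sample induces, so the penalized model is expected to land closer to the shared Bayes threshold and score higher on the unbiased test set; the direction of the effect, not the exact numbers, is the point.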
