Inherent Trade-offs in the Fair Allocation of Treatments

Explicit and implicit biases cloud human judgement, leading to discriminatory treatment of minority groups. A fundamental goal of algorithmic fairness is to avoid the pitfalls of human judgement by learning policies that improve overall outcomes while treating protected classes fairly. In this paper, we propose a causal framework that learns optimal intervention policies from data subject to fairness constraints. We define two measures of treatment bias and infer the treatment assignment that minimizes bias while optimizing the overall outcome. We demonstrate an inherent trade-off between fairness and overall benefit; however, allowing preferential treatment of protected classes in certain circumstances (affirmative action) can dramatically improve the overall benefit while also preserving fairness. We apply our framework to data on student outcomes on standardized tests and show how it can be used to design real-world policies that fairly improve student test scores. Our framework provides a principled way to learn fair treatment policies in real-world settings.
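To make the setup concrete, the sketch below shows one simple way to choose a treatment policy that maximizes total estimated benefit under a budget while capping the gap in treatment rates between two groups. It is a minimal illustration, not the paper's method: it assumes individual treatment effects (tau) have already been estimated, and the placeholder names tau, budget, and eps, as well as the rate-gap constraint itself, are illustrative stand-ins rather than the paper's two bias measures.

```python
# Illustrative sketch: budgeted treatment allocation with a bound on the
# difference in treatment rates between two groups, solved as an LP relaxation.
# tau, budget, and eps are hypothetical placeholders, not values from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, size=n)            # protected-class indicator (0/1)
tau = rng.normal(loc=0.5, scale=1.0, size=n)  # estimated treatment effects (assumed given)
budget = 60                                   # at most 60 individuals can be treated
eps = 0.05                                    # allowed gap in group treatment rates

n0, n1 = np.sum(group == 0), np.sum(group == 1)

# Decision variables x_i in [0, 1]: LP relaxation of the binary assignment.
c = -tau  # linprog minimizes, so negate the benefit

# Budget constraint: sum_i x_i <= budget
A_budget = np.ones((1, n))
b_budget = [budget]

# Fairness constraint: |mean(x | group=1) - mean(x | group=0)| <= eps,
# written as two linear inequalities.
rate_diff = (group == 1) / n1 - (group == 0) / n0
A_fair = np.vstack([rate_diff, -rate_diff])
b_fair = [eps, eps]

res = linprog(
    c,
    A_ub=np.vstack([A_budget, A_fair]),
    b_ub=b_budget + b_fair,
    bounds=[(0, 1)] * n,
    method="highs",
)
policy = res.x > 0.5  # round the relaxation to a binary assignment
print("treated:", int(policy.sum()),
      "rate gap:", abs(policy[group == 1].mean() - policy[group == 0].mean()))
```

Relaxing eps, or constraining bias in benefits rather than in treatment rates, changes which allocations are feasible; this is the kind of lever behind the trade-off (and the affirmative-action exception) discussed in the abstract.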
