Fairness Under Composition

Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naive composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition.
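
As a minimal, self-contained illustration of why per-classifier fairness need not survive composition (this is an illustrative sketch, not one of the paper's constructions), consider two hypothetical screening classifiers A and B that each satisfy demographic parity in isolation, selecting about 50% of each group. If their decisions are perfectly correlated within one group but independent within the other, the AND-composition (accept only if both accept) selects roughly 50% of one group and only 25% of the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Group 0: the two classifiers' decisions are perfectly correlated.
a0 = rng.random(n) < 0.5
b0 = a0.copy()

# Group 1: the two classifiers' decisions are independent.
a1 = rng.random(n) < 0.5
b1 = rng.random(n) < 0.5

# Each classifier alone satisfies demographic parity: ~50% selection in each group.
print("Classifier A selection rates:", a0.mean(), a1.mean())
print("Classifier B selection rates:", b0.mean(), b1.mean())

# The AND-composition does not: ~0.50 for group 0 vs ~0.25 for group 1.
print("Composed selection rates:    ", (a0 & b0).mean(), (a1 & b1).mean())
```

The same effect arises whenever the within-group correlations between stages differ, which is why fairness of each stage in isolation is not, by itself, evidence of end-to-end fairness.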

References

[1] Guy N. Rothblum et al. Calibration for the (Computationally-Identifiable) Masses. ArXiv, 2017.

[2] Toon Calders et al. Classifying without discriminating. 2009 2nd International Conference on Computer, Control and Communication, 2009.

[3] Nathan Srebro et al. Equality of Opportunity in Supervised Learning. NIPS, 2016.

[4] Guy N. Rothblum et al. Fairness Through Computationally-Bounded Awareness. NeurIPS, 2018.

[5] Yaacov Ritov et al. On conditional parity as a notion of non-discrimination in machine learning. ArXiv, 2017.

[6] Catherine Tucker et al. Algorithmic bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads. 2019.

[7] Cynthia Dwork et al. Differential Privacy. ICALP, 2006.

[8] Bernhard Schölkopf et al. Avoiding Discrimination through Causal Reasoning. NIPS, 2017.

[9] Franco Turini et al. Discrimination-aware data mining. KDD, 2008.

[10] Yiling Chen et al. Fairness at Equilibrium in the Labor Market. ArXiv, 2017.

[11] Alexandra Chouldechova et al. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 2016.

[12] Suresh Venkatasubramanian et al. Fair Pipelines. ArXiv, 2017.

[13] Aaron Roth et al. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, 2014.

[14] Jon M. Kleinberg et al. Inherent Trade-Offs in the Fair Determination of Risk Scores. ITCS, 2016.

[15] Toniann Pitassi et al. Learning Fair Representations. ICML, 2013.

[16] Matt J. Kusner et al. Counterfactual Fairness. NIPS, 2017.

[17] Peter Kuhn et al. Gender Discrimination in Job Ads: Evidence from China. 2013.

[18] Toniann Pitassi et al. Learning Adversarially Fair and Transferable Representations. ICML, 2018.

[19] Esther Rolf et al. Delayed Impact of Fair Machine Learning. ICML, 2018.

[20] Christopher Jung et al. Online Learning with an Unknown Fairness Metric. NeurIPS, 2018.

[21] Toniann Pitassi et al. Fairness through awareness. ITCS '12, 2011.

[22] Jun Sakuma et al. Fairness-aware Learning through Regularization Approach. 2011 IEEE 11th International Conference on Data Mining Workshops, 2011.

[23] Michael Carl Tschantz et al. Automated Experiments on Ad Privacy Settings. Proceedings on Privacy Enhancing Technologies, 2014.

[24] Seth Neel et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. ICML, 2017.