Measuring Fairness in an Unfair World

Computer scientists have made great strides in characterizing different measures of algorithmic fairness, and in showing that certain of these measures cannot be jointly satisfied. In this paper, I argue that the three most popular families of measures - unconditional independence, target-conditional independence, and classification-conditional independence - make assumptions that are unsustainable in the context of an unjust world. I begin by introducing the measures and the implicit idealizations they make about the underlying causal structure of the contexts in which they are deployed. I then discuss how these idealizations fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. In the final section, I suggest an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.
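
For concreteness, the three families can be stated as conditional-independence conditions in the notation standard in the fair machine learning literature. This is an illustrative sketch under the usual identifications, not a formulation taken from the abstract itself: A denotes the sensitive attribute, Y the target variable, and \hat{Y} the classifier's prediction; "unconditional independence" then corresponds to demographic parity, "target-conditional independence" to equalized odds, and "classification-conditional independence" to calibration or predictive parity.

    Unconditional independence (demographic parity):        \hat{Y} \perp A
    Target-conditional independence (equalized odds):       \hat{Y} \perp A \mid Y
    Classification-conditional independence (calibration):  Y \perp A \mid \hat{Y}

Read this way, the incompatibility results mentioned above say that, outside of degenerate cases (e.g., equal base rates of Y across groups or a perfect classifier), no classifier can satisfy all three conditions at once.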
