Paradoxes in Fair Machine Learning

Equalized odds is a statistical notion of fairness in machine learning which requires that a classifier's true positive and false positive rates be equal across protected groups. We extend equalized odds to the setting of cardinality-constrained fair classification, in which only a bounded amount of a resource is available to distribute. This setting coincides with classic fair division problems, allowing us to apply axioms from that literature alongside equalized odds. In particular, we consider resource monotonicity, consistency, and population monotonicity, each of which relates allocations across different instances in order to rule out paradoxical behavior. Using a geometric characterization of equalized odds, we examine its compatibility with each of these axioms. Finally, we empirically evaluate the cost of allocation rules that satisfy both equalized odds and the fair division axioms on a dataset of FICO credit scores.
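To make the setting concrete: equalized odds asks that a classifier's true positive rate and false positive rate match across protected groups, while the cardinality constraint caps how many individuals can receive the resource. The sketch below is a minimal Python illustration of auditing such an allocation for empirical equalized-odds violations; the helper names (`group_rates`, `equalized_odds_gap`, `allocate_top_k`) and the top-k thresholding rule are illustrative assumptions of ours, not the allocation rules studied in the paper.

```python
import numpy as np

def group_rates(y_true, y_pred, mask):
    """True positive and false positive rates restricted to one group (boolean mask)."""
    tp = np.sum((y_pred == 1) & (y_true == 1) & mask)
    fn = np.sum((y_pred == 0) & (y_true == 1) & mask)
    fp = np.sum((y_pred == 1) & (y_true == 0) & mask)
    tn = np.sum((y_pred == 0) & (y_true == 0) & mask)
    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return tpr, fpr

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest gap in TPR and in FPR across groups; (0, 0) means the
    predictions satisfy empirical equalized odds exactly."""
    rates = [group_rates(y_true, y_pred, groups == g) for g in np.unique(groups)]
    tprs, fprs = zip(*rates)
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

def allocate_top_k(scores, k):
    """Hypothetical cardinality-constrained rule: award the resource to the k highest scores."""
    y_pred = np.zeros(len(scores), dtype=int)
    y_pred[np.argsort(-scores)[:k]] = 1
    return y_pred

# Synthetic example with two protected groups (not the FICO data from the paper).
rng = np.random.default_rng(0)
scores = rng.random(200)                           # model scores in [0, 1]
y_true = (rng.random(200) < scores).astype(int)    # outcomes correlated with scores
groups = rng.integers(0, 2, size=200)              # binary protected attribute
y_pred = allocate_top_k(scores, k=50)              # distribute 50 units of the resource
print(equalized_odds_gap(y_true, y_pred, groups))  # (TPR gap, FPR gap)
```

A single score threshold like this will generally not satisfy equalized odds exactly; group-dependent (and possibly randomized) thresholds, as in Hardt, Price, and Srebro's post-processing approach, are the standard way to meet the criterion under a fixed budget.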
