Individual Fairness in Hindsight

As algorithms increasingly make critical decisions that impact human lives, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant to online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of the conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF: the treatment of each individual is required to be individually fair relative to both past and future decisions. FH, in contrast, imposes a one-sided notion of individual fairness defined relative to past decisions only. We show that these two definitions can have drastically different implications when the principal must learn the utility model: under FT, linear regret relative to the optimal individually fair decisions is inevitable in non-trivial instances. In contrast, we design a new algorithm, Cautious Fair Exploration (CAFE), that satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to make good decisions in the long run.
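
To make the two temporal definitions concrete, here is a minimal sketch of the corresponding fairness checks. It assumes that each decision can be summarized by a scalar conduciveness score in [0, 1] and that a task-specific similarity metric `dist` over individuals is given; the function names and the scalar encoding are illustrative assumptions rather than the paper's formal model.

```python
# Illustrative sketch only: decisions are modeled as scalar
# "conduciveness" scores in [0, 1], and `dist` is an assumed
# task-specific similarity metric over individual features.

def is_individually_fair(dist_xy, p_x, p_y):
    """Individual fairness (Dwork et al. 2012): the gap in treatment
    between two individuals is bounded by their similarity distance."""
    return abs(p_x - p_y) <= dist_xy

def satisfies_ft(past, future, x_t, p_t, dist):
    """Fairness-across-time (FT), informally: the decision p_t must be
    individually fair against past *and* future decisions."""
    return all(is_individually_fair(dist(x_t, x_s), p_t, p_s)
               for x_s, p_s in past + future)

def satisfies_fh(past, x_t, p_t, dist):
    """Fairness-in-hindsight (FH), informally: a one-sided constraint
    against the past only -- the current individual may not receive a
    decision less conducive than that of a similar past individual by
    more than their similarity distance."""
    return all(p_t >= p_s - dist(x_t, x_s) for x_s, p_s in past)

# Toy usage with a 1-D feature space and absolute-difference metric.
dist = lambda a, b: abs(a - b)
past = [(0.2, 0.5), (0.8, 0.9)]              # (features, decision) pairs
print(satisfies_fh(past, 0.25, 0.50, dist))  # True: matches a similar past case
print(satisfies_fh(past, 0.75, 0.30, dist))  # False: far worse than a similar past case
```

Intuitively, the one-sided FH constraint never binds future decisions from above, which leaves room for a learner to start cautiously and improve its decisions as the utility model is learned; the precise mechanism of CAFE is developed in the paper.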

[1] Seth Neel, et al. Rawlsian Fairness for Machine Learning, 2016, arXiv.

[2] Christopher Jung, et al. Online Learning with an Unknown Fairness Metric, 2018, NeurIPS.

[3] Christopher T. Lowenkamp, et al. False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. And It's Biased against Blacks", 2016.

[4] Christopher Jung, et al. Fair Algorithms for Learning in Allocation Problems, 2018, FAT.

[5] Krishna P. Gummadi, et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, 2016, WWW.

[6] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.

[7] Franco Turini, et al. Discrimination-aware data mining, 2008, KDD.

[8] Sébastien Bubeck, et al. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, 2012, Found. Trends Mach. Learn.

[9] Andreas Krause, et al. Preventing Disparate Treatment in Sequential Decision Making, 2018, IJCAI.

[10] James H. Fowler, et al. The authority of Supreme Court precedent, 2008, Social Networks 30, 16–30.

[11] Aaron Roth, et al. Fairness in Learning: Classic and Contextual Bandits, 2016, NIPS.

[12] Seth Neel, et al. Fair Algorithms for Infinite and Contextual Bandits, 2016, arXiv:1610.09559.

[13] Toniann Pitassi, et al. Learning Fair Representations, 2013, ICML.

[14] H. E. Baber, et al. Globalization and International Development: The Ethical Issues, 2013.

[15] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.

[16] Yiling Chen, et al. Welfare and Distributional Impacts of Fair Classification, 2018, arXiv.

[17] Guy N. Rothblum, et al. Probably Approximately Metric-Fair Learning, 2018, ICML.

[18] Alexandra Chouldechova, et al. Fairer and more accurate, but for whom?, 2017, arXiv.

[19] Jun Sakuma, et al. Fairness-aware Learning through Regularization Approach, 2011, IEEE 11th International Conference on Data Mining Workshops.

[20] Toon Calders, et al. Classifying without discriminating, 2009, 2nd International Conference on Computer, Control and Communication.

[21] M. L. Friedland. Prospective and Retrospective Judicial Lawmaking, 1974.

[22] Joseph W. Alba, et al. Consumer Perceptions of Price (Un)Fairness, 2003.

[23] J. Rawls, et al. Justice as Fairness: A Restatement, 2001.

[24] Latanya Sweeney, et al. Discrimination in online ad delivery, 2013, CACM.

[25] Aaron Roth, et al. Fairness in Reinforcement Learning, 2016, ICML.

[26] Anne Phillips, et al. Defending equality of outcome, 2004.

[27] C. Dwork, et al. Individual Fairness Under Composition, 2018.

[28] Jon M. Kleinberg, et al. Inherent Trade-Offs in the Fair Determination of Risk Scores, 2016, ITCS.

[29] Toon Calders, et al. Three naive Bayes approaches for discrimination-free classification, 2010, Data Mining and Knowledge Discovery.

[30] Sharad Goel, et al. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, 2018, arXiv.

[31] Alexandra Chouldechova, et al. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, 2016, Big Data.

[32] Kurt T. Lash. The Cost of Judicial Error: Stare Decisis and the Role of Normative Theory, 2013.

[34] Yang Liu, et al. Calibrated Fairness in Bandits, 2017, arXiv.

[35] Nisheeth K. Vishnoi, et al. An Algorithmic Framework to Control Bias in Bandit-based Personalization, 2018, arXiv.