Preference-Informed Fairness

In this work, we study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al. [ITCS 2012], which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less preferred outcomes for the very individuals that IF aims to protect (e.g., a protected minority group). A natural alternative to IF is envy-freeness (EF), the classic notion from fair division: no individual should prefer another individual's outcome over their own. Although EF allows for solutions in which every individual receives a highly preferred outcome, EF can also be overly restrictive for the decision-maker. For instance, if many individuals agree on the best outcome, then whenever any one of them receives this outcome, all of them must receive it, regardless of each individual's underlying qualifications for the outcome. We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style constraints, but allows deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted-advertising setting introduced by Dwork and Ilvento [ITCS 2019].
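To make the comparison concrete, the three notions can be written as constraints on the outcome each individual receives. The formalization below is a gloss on the abstract, not the paper's verbatim definitions: individual i receives a distribution p_i over outcomes, d is the task-specific similarity metric, D is a divergence on outcome distributions, and u_i is individual i's utility function.

```latex
% IF: similarly qualified individuals receive similar outcome distributions.
% EF: no individual prefers another's outcome distribution to their own.
% PIIF: for every pair (i, j) there is a witness outcome p'_{ij} that would
% satisfy the IF constraint relative to j, and that i likes no better than
% their actual outcome.
\begin{align*}
\text{(IF)}\quad   & \forall i,j:\; \mathcal{D}(p_i, p_j) \le d(i,j)\\
\text{(EF)}\quad   & \forall i,j:\; u_i(p_i) \ge u_i(p_j)\\
\text{(PIIF)}\quad & \forall i,j\ \exists\, p'_{ij}:\;
  \mathcal{D}(p'_{ij}, p_j) \le d(i,j) \;\;\text{and}\;\; u_i(p_i) \ge u_i(p'_{ij})
\end{align*}
```

Under this reading, taking the witness p'_{ij} = p_i recovers the IF constraint, and taking p'_{ij} = p_j recovers the EF constraint, which is the sense in which PIIF relaxes both notions. The witness form also suggests why optimization can be tractable: if the witnesses are added as explicit variables, then for linear utilities and a convex divergence every constraint is convex. The following sketch illustrates this under those assumptions; the instance data, the total-variation choice of D, and the total-utility objective are illustrative and not taken from the paper.

```python
# A minimal sketch (not the authors' implementation): optimize a convex
# objective subject to PIIF-style constraints, assuming linear utilities
# u_i(p) = v[i] @ p and the distance D(p, q) = 0.5 * ||p - q||_1.
# The metric d, utilities v, and objective below are made up for illustration.
import cvxpy as cp
import numpy as np

n, k = 3, 4                            # individuals, outcomes
rng = np.random.default_rng(0)
v = rng.random((n, k))                 # v[i]: i's utility for each outcome
d = rng.random((n, n))
d = (d + d.T) / 2                      # symmetrize the similarity metric
np.fill_diagonal(d, 0)

p = cp.Variable((n, k), nonneg=True)   # p[i]: i's outcome distribution
w = {(i, j): cp.Variable(k, nonneg=True)   # witness p'_{ij} for each pair
     for i in range(n) for j in range(n) if i != j}

cons = [cp.sum(p, axis=1) == 1]
for (i, j), pij in w.items():
    cons += [
        cp.sum(pij) == 1,
        0.5 * cp.norm1(pij - p[j]) <= d[i, j],  # witness is IF-close to p_j
        v[i] @ p[i] >= v[i] @ pij,              # i weakly prefers own outcome
    ]

# Any convex objective over the outcomes works; here we maximize total utility.
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(v, p))), cons)
prob.solve()
print(np.round(p.value, 3))
```

Because each of the n^2 witnesses appears only in convex constraints, the whole program stays convex (a linear program for this choice of distance and utilities); handling richer preference classes is what the paper's algorithmic results address.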

[1] Christopher Jung et al. Eliciting and Enforcing Subjective Individual Fairness. arXiv, 2019.

[2] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.

[3] Michael Carl Tschantz et al. Exploring User Perceptions of Discrimination in Online Targeted Advertising. USENIX Security Symposium, 2017.

[4] J. Dewland. Reuters, 2009.

[5] Cynthia Dwork and Christina Ilvento. Fairness Under Composition. ITCS, 2019.

[6] Feng Lifang. Facebook. The SAGE International Encyclopedia of Mass Media and Society, 2020.

[7] Seth Neel et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. ICML, 2018.

[8] René M. Stulz et al. Bank CEO Incentives and the Credit Crisis. 2010.

[9] Fumio Kodama. Harvard Business Review: an overview of the abstracting journal. 1987.

[10] Virginia Law Review, 2009.

[11] Douglas A. Blackmon. The Wall Street Journal: FedEx aims to leap ahead via the supply chain, taking over all of Cisco's shipping operations; new service does away with corporate warehouses. 1999.

[12] Guy N. Rothblum et al. Calibration for the (Computationally-Identifiable) Masses. arXiv, 2017.

[13] Daphne Koller et al. Learning an Agent's Utility Function by Observing Behavior. ICML, 2001.

[14] Shafi Goldwasser, editor. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS), 2012.

[15] Tony Doyle et al. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. The Information Society, 2017.

[16] Deborah Hellman. Measuring Algorithmic Fairness. 2019.

[17] Josef Hadar and William R. Russell. Rules for Ordering Uncertain Prospects. The American Economic Review, 1969.

[18] Christopher Jung et al. Online Learning with an Unknown Fairness Metric. NeurIPS, 2018.

[19] Piotr Sapiezynski et al. Discrimination through Optimization: How Facebook's Ad Delivery Can Lead to Biased Outcomes. Proc. ACM Hum.-Comput. Interact., 2019.

[20] Francis Edward Su. Rental Harmony: Sperner's Lemma in Fair Division. The American Mathematical Monthly, 1999.

[21] Maria-Florina Balcan et al. Envy-Free Classification. NeurIPS, 2019.

[22] Latanya Sweeney. Discrimination in Online Ad Delivery. Communications of the ACM, 2013.

[23] Advances in Neural Information Processing Systems 30 (NIPS), 2017.

[24] R. Lathe. PhD by thesis. Nature, 1988.

[25] Aaron Roth et al. Average Individual Fairness: Algorithms, Generalization and Experiments. NeurIPS, 2019.

[26] Harris Mateen. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. 2018.

[27] Jack M. Robertson and William Webb. Cake-Cutting Algorithms: Be Fair If You Can. 1998.

[28] Deborah Hellman. Two Concepts of Discrimination. 2015.

[29] Guy N. Rothblum et al. Multicalibration: Calibration for the (Computationally-Identifiable) Masses. ICML, 2018.

[30] G. DeFriese. The New York Times. Publishing for Libraries, 2020.

[31] Guy N. Rothblum et al. Probably Approximately Metric-Fair Learning. ICML, 2018.

[32] Anja Lambrecht and Catherine Tucker. Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads. 2019.

[33] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through Awareness. ITCS, 2012.

[34] Meena Jagadeesan et al. Multi-Category Fairness in Sponsored Search Auctions. FAT*, 2019.

[35] Aleksandra Korolova et al. Discrimination through Optimization: How Facebook's Ad Delivery Can Lead to Biased Outcomes. Proc. ACM Hum.-Comput. Interact., 2019.

[36] Duncan K. Foley. Resource Allocation and the Public Sector. Yale Economic Essays, 1967.

[37] Michael Carl Tschantz et al. Automated Experiments on Ad Privacy Settings. Proceedings on Privacy Enhancing Technologies, 2014.

[38] Simone Muench et al. Queue. 2020.

[39] Vijay S. Bawa. Optimal Rules for Ordering Uncertain Prospects. Journal of Financial Economics, 1975.

[40] Guy N. Rothblum et al. Fairness Through Computationally-Bounded Awareness. NeurIPS, 2018.

[41] Krishna P. Gummadi et al. From Parity to Preference-based Notions of Fairness in Classification. NIPS, 2017.

[42] Shuchi Chawla et al. Individual Fairness in Sponsored Search Auctions. arXiv, 2019.

[43] F. A. Hayek. The American Economic Review, 2007.