Equity of Attention: Amortizing Individual Fairness in Rankings

Rankings of people and items are at the heart of selection processes, match-making, and recommender systems, ranging from employment sites to sharing economy platforms. As ranking positions influence the amount of attention the ranked subjects receive, biases in rankings can lead to an unfair distribution of opportunities and resources such as jobs or income. This paper proposes new measures and mechanisms to quantify and mitigate unfairness from a bias inherent to all rankings, namely, the position bias, which leads to disproportionately less attention being paid to low-ranked subjects. Our approach differs from recent fair ranking approaches in two important ways. First, existing works measure unfairness at the level of subject groups, while our measures capture unfairness at the level of individual subjects, and as such subsume group unfairness. Second, as no single ranking can achieve individual attention fairness, we propose a novel mechanism that achieves amortized fairness, where attention accumulated across a series of rankings is proportional to accumulated relevance. We formulate the challenge of achieving amortized individual fairness subject to constraints on ranking quality as an online optimization problem and show that it can be solved as an integer linear program. Our experimental evaluation reveals that unfair attention distribution in rankings can be substantial, and demonstrates that our method can improve individual fairness while retaining high ranking quality.
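The amortized-fairness idea described above can be illustrated with a toy simulation. This sketch is not the paper's method: it assumes a simple geometric position-bias model for attention and an L1 distance between normalized cumulative attention and normalized cumulative relevance as the unfairness measure; the function names (`position_attention`, `amortized_unfairness`, `simulate`) are illustrative, and the paper's actual mechanism optimizes rankings online via an integer linear program under empirical attention models.

```python
def position_attention(rank, gamma=0.5):
    """Attention received at a ranking position (0-indexed).

    A geometric position-bias model, used here only as an assumption;
    real attention would come from an empirical click model.
    """
    return gamma ** rank


def simulate(rankings, relevance, gamma=0.5):
    """Accumulate attention per subject over a series of rankings."""
    attention = {s: 0.0 for s in relevance}
    for ranking in rankings:
        for pos, subject in enumerate(ranking):
            attention[subject] += position_attention(pos, gamma)
    return attention


def amortized_unfairness(attention, relevance):
    """L1 distance between normalized cumulative attention and relevance.

    Zero means attention is exactly proportional to relevance,
    amortized over the whole series of rankings.
    """
    total_a = sum(attention.values())
    total_r = sum(relevance.values())
    return sum(abs(attention[s] / total_a - relevance[s] / total_r)
               for s in attention)


relevance = {"a": 1.0, "b": 1.0}  # two equally relevant subjects

# Repeating the same ranking concentrates attention on the top subject.
static = simulate([["a", "b"], ["a", "b"]], relevance)

# Alternating the order equalizes accumulated attention.
alternating = simulate([["a", "b"], ["b", "a"]], relevance)
```

With equal relevance, the static series gives subject "a" twice the attention of "b" (unfairness 1/3 under this toy measure), while the alternating series drives the amortized unfairness to zero, which is the intuition behind trading off positions across a series of rankings.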