Understanding and Mitigating the Effect of Outliers in Fair Ranking

Traditional ranking systems are expected to sort items in order of their relevance, thereby maximizing their utility. In fair ranking, utility is complemented with fairness as an optimization goal. Recent work on fair ranking focuses on developing algorithms to optimize for fairness, given position-based exposure. In contrast, we identify the potential of outliers in a ranking to influence exposure and thereby negatively impact fairness. An outlier in a list of items can alter the examination probabilities, which can lead to different distributions of attention compared to position-based exposure. We formalize outlierness in a ranking, show that outliers are present in realistic datasets, and present the results of an eye-tracking study showing that users’ scanning order and the exposure of items are influenced by the presence of outliers. We then introduce OMIT, a method for fair ranking in the presence of outliers. Given an outlier detection method, OMIT improves the fair allocation of exposure by suppressing outliers in the top-k ranking. Using an academic search dataset, we show that outlierness optimization leads to a fairer policy that displays fewer outliers in the top-k, while maintaining a reasonable trade-off between fairness and utility.
