Multi-grouping Robust Fair Ranking
Rankings are at the core of countless modern applications and thus play a major role in various decision-making scenarios. When such rankings are produced by data-informed, machine learning-based algorithms, the potentially harmful biases contained in the data and algorithms are likely to be reproduced and even exacerbated. This has motivated recent research on fair ranking as a way to correct these biases. Current approaches to fair ranking assume that the protected groups, i.e., the partition of the population potentially impacted by the biases, are known in advance. In realistic scenarios, however, this assumption may not hold, as different biases may lead to different partitionings into protected groups. Accounting for only one such partition (i.e., grouping) would still leave the ranking potentially unfair with respect to the other possible groupings. In this paper, we therefore study the problem of designing fair ranking algorithms without knowing in advance the groupings that will later be used to assess their fairness. Our approach is to rely on a carefully chosen set of groupings when deriving the ranked lists, and we empirically investigate which selection strategies are the most effective. We also propose an efficient two-step greedy brute-force method that implements our strategy. As a benchmark for this study, we adopt the dataset and setting of the TREC 2019 Fair Ranking track.
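The abstract describes the approach only at a high level; as an illustration of the general idea, here is a minimal Python sketch of a greedy construction that, at each rank position, appends the item minimising the worst-case exposure unfairness across a set of candidate groupings. Everything below is an assumption made for illustration: the logarithmic position-bias model, the population-share fairness target, and the function names (`greedy_robust_ranking`, `unfairness`) are hypothetical and not taken from the paper.

```python
import math

def exposure(position):
    """Position-bias weight for a 0-indexed rank (logarithmic discount;
    an assumed model, not necessarily the one used in the paper)."""
    return 1.0 / math.log2(position + 2)

def unfairness(ranking, grouping, n_items):
    """Worst absolute gap, over the groups of one grouping, between a
    group's share of exposure in `ranking` and its share of the whole
    population. `grouping` maps every item to its group label."""
    total = sum(exposure(p) for p in range(len(ranking)))
    group_sizes = {}
    for g in grouping.values():
        group_sizes[g] = group_sizes.get(g, 0) + 1
    group_exposure = {}
    for pos, item in enumerate(ranking):
        g = grouping[item]
        group_exposure[g] = group_exposure.get(g, 0.0) + exposure(pos)
    return max(
        abs(group_exposure.get(g, 0.0) / total - size / n_items)
        for g, size in group_sizes.items()
    )

def greedy_robust_ranking(items, groupings, k):
    """Greedily build a top-k ranking: at each position, append the item
    that minimises the maximum unfairness over all candidate groupings."""
    ranking, remaining = [], set(items)
    for _ in range(k):
        best_item, best_score = None, float("inf")
        for item in remaining:
            score = max(
                unfairness(ranking + [item], g, len(items)) for g in groupings
            )
            if score < best_score:
                best_item, best_score = item, score
        ranking.append(best_item)
        remaining.remove(best_item)
    return ranking

# Toy usage: six items assessed under two different groupings.
items = ["a", "b", "c", "d", "e", "f"]
by_gender = {"a": "g1", "b": "g1", "c": "g1", "d": "g2", "e": "g2", "f": "g2"}
by_region = {"a": "r1", "b": "r2", "c": "r1", "d": "r2", "e": "r1", "f": "r2"}
print(greedy_robust_ranking(items, [by_gender, by_region], k=4))
```

A real system would additionally trade this fairness objective off against relevance, as the TREC Fair Ranking setting requires; the sketch omits that term for brevity.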