Counterfactual Representations for Intersectional Fair Ranking in Recruitment

Fairness interventions require access to the sensitive attributes of candidates applying for a job, which might not be available due to limitations imposed by data protection laws. In this work, we propose a pre-processing technique that creates counterfactual representations of the candidates, leading to a more diverse ranking with respect to intersectional groups. To remain compliant with data protection laws, we train a ranking model on these fairer representations and apply it at inference time without access to the candidates' sensitive attributes. In experiments on the BIOS dataset, we find that this approach can improve the diversity of recommendations at top-ranked positions without harming ranking performance.

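The abstract describes a three-step pipeline: build counterfactual (fairer) candidate representations using sensitive attributes at pre-processing time only, train a ranking model on those representations, and score new candidates at inference time without their sensitive attributes. The Python sketch below illustrates this split under simplifying assumptions; the mean-difference projection, the synthetic data, and the name `counterfactual_projection` are illustrative stand-ins, not the paper's actual method.

```python
# Minimal sketch of the pre-processing / training / inference split described in
# the abstract. Sensitive attributes are consumed only in the pre-processing step;
# the trained ranker is applied to new candidates without them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy candidate embeddings (e.g., derived from biography text) with a synthetic
# sensitive attribute that leaks into one embedding direction.
n, d = 1000, 16
sensitive = rng.integers(0, 2, size=n)       # available only during training
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * sensitive                   # attribute-correlated direction
relevance = (X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

def counterfactual_projection(X, sensitive):
    """Remove the mean-difference direction associated with the sensitive
    attribute -- a simple stand-in for counterfactual representation building."""
    direction = X[sensitive == 1].mean(axis=0) - X[sensitive == 0].mean(axis=0)
    direction /= np.linalg.norm(direction)
    return X - np.outer(X @ direction, direction)

# Pre-processing: sensitive attributes are used here and nowhere else.
X_fair = counterfactual_projection(X, sensitive)

# Train a scoring model on the fairer representations.
ranker = LogisticRegression().fit(X_fair, relevance)

# Inference: rank new candidates without access to their sensitive attributes.
X_new = rng.normal(size=(5, d))
scores = ranker.predict_proba(X_new)[:, 1]
ranking = np.argsort(-scores)
print("Ranked candidate indices:", ranking)
```

In this sketch the scoring model replaces a learning-to-rank objective for brevity; the key point is that per-candidate sensitive attributes never enter the inference path.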