Localized Fairness in Recommender Systems

Recent research in fairness in machine learning has identified situations in which biases in input data can cause harmful or unwanted effects. Researchers in personalization and recommendation have begun to study similar types of bias. What these lines of research share is a fixed representation of the protected groups relative to which bias must be monitored. However, in some real-world application contexts, such groups cannot be defined a priori, but must instead be derived from the data itself. Furthermore, as we show, it may be insufficient in such cases to examine global system properties to identify protected groups. We therefore demonstrate that fairness may be local, and that the identification of protected groups may be possible only through consideration of local conditions.
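
To make the central claim concrete, the following toy sketch (not from the paper) compares a global exposure metric with the same metric computed per user segment. The segment names, provider groups, and counts are all hypothetical, and fairness is reduced here to a simple exposure-share check against an assumed 25% catalog share for the protected group; the point is only that a group can receive its proportional share of recommendations globally while being severely under-exposed within one local segment.

```python
# Hypothetical sketch: global vs. local exposure shares for a protected
# provider group. All segments, groups, and counts are made up; they are
# not data from the paper.
from collections import defaultdict

# (user_segment, provider_group) -> number of recommendation slots received
recs = {
    ("region_A", "protected"): 450, ("region_A", "other"): 550,
    ("region_B", "protected"): 50,  ("region_B", "other"): 950,
}

def exposure_share(counts):
    """Fraction of recommendation slots going to each provider group."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Global view: aggregate over all user segments.
global_counts = defaultdict(int)
for (_, group), n in recs.items():
    global_counts[group] += n
print("global:", exposure_share(global_counts))
# -> protected gets 0.25, matching the assumed 25% catalog share,
#    so a global audit would flag nothing.

# Local view: the same metric per user segment.
for segment in ("region_A", "region_B"):
    local = {g: recs[(segment, g)] for g in ("protected", "other")}
    print(segment, exposure_share(local))
# -> region_A: protected ~0.45 (roughly balanced)
# -> region_B: protected ~0.05 (severe local under-exposure)
```

Only the per-segment computation reveals that region_B is under-serving the protected group, which is the sense in which identifying such a group requires examining local conditions rather than global system properties.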
