Towards Personalized Fairness based on Causal Notion

Recommender systems have an increasingly critical impact on individuals and society, since a growing number of users rely on them for information seeking and decision making. It is therefore crucial to address potential unfairness in recommendations. Just as users have personalized preferences over items, their demands for fairness are also personalized in many scenarios, so it is important to provide personalized fair recommendations that satisfy each user's own fairness demands. Moreover, previous work on fair recommendation has mainly focused on association-based fairness; assessing fairness properly in recommender systems, however, requires advancing from associative fairness notions to causal ones. Based on these considerations, this paper focuses on achieving personalized counterfactual fairness for users in recommender systems. To this end, we introduce a framework that produces counterfactually fair recommendations through adversarial learning, generating feature-independent user embeddings for recommendation. The framework allows recommender systems to achieve personalized fairness for users while also covering non-personalized settings. Experiments on two real-world datasets with both shallow and deep recommendation algorithms show that our method generates fairer recommendations while maintaining desirable recommendation performance.
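The adversarial idea sketched in the abstract can be illustrated with a minimal example: a recommender is trained jointly with a discriminator that tries to recover a sensitive attribute from the filtered user embedding, so the embedding is pushed to become independent of that attribute. The sketch below is an assumption-laden illustration, not the paper's exact implementation; the matrix-factorization backbone, the single binary attribute, and all module and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FairMF(nn.Module):
    """Matrix-factorization recommender with a filter on the user side (illustrative)."""

    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Filter network mapping the raw user embedding to a representation
        # intended to be independent of the sensitive attribute.
        self.filter = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, users, items):
        u = self.filter(self.user_emb(users))    # filtered user embedding
        i = self.item_emb(items)
        return (u * i).sum(-1), u                # predicted score, filtered embedding


class Discriminator(nn.Module):
    """Tries to predict a binary sensitive attribute from the filtered embedding."""

    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, u):
        return self.net(u).squeeze(-1)


def train_step(model, disc, opt_rec, opt_disc, users, items, ratings, attr, lam=1.0):
    """One alternating min-max step: train the discriminator to detect the
    attribute, then train the recommender to fit ratings while fooling it."""
    # (1) Discriminator update on the detached embedding.
    _, u = model(users, items)
    d_loss = F.binary_cross_entropy_with_logits(disc(u.detach()), attr.float())
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # (2) Recommender update: minimize rating loss, maximize discriminator loss.
    scores, u = model(users, items)
    rec_loss = F.mse_loss(scores, ratings)
    adv_loss = F.binary_cross_entropy_with_logits(disc(u), attr.float())
    opt_rec.zero_grad()
    (rec_loss - lam * adv_loss).backward()
    opt_rec.step()
    return rec_loss.item(), d_loss.item()
```

In a sketch like this, the two optimizers are alternated each mini-batch, and lam trades off recommendation accuracy against how aggressively attribute information is removed from the user embedding; handling several sensitive attributes, or only the subset a particular user asks to be protected, would amount to attaching one such discriminator per protected attribute.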
