Fairness Perception from a Network-Centric Perspective

Algorithmic fairness has become a major concern in recent years as the influence of machine learning algorithms has grown more widespread. In this paper, we investigate algorithmic fairness from a network-centric perspective. Specifically, we introduce a novel yet intuitive function known as fairness perception and provide an axiomatic analysis of its properties. Using a peer-review network as a case study, we examine its utility for assessing the perceived fairness of paper acceptance decisions. We show how the function can be extended to a group fairness metric known as fairness visibility and demonstrate its relationship to demographic parity. We also discuss a potential pitfall of the fairness visibility measure: it can be exploited to mislead individuals into perceiving algorithmic decisions as fair. We demonstrate how this problem can be alleviated by increasing the local neighborhood size of the fairness perception function.
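To make the ideas above concrete, the following is a minimal, hypothetical sketch; the paper's exact definitions are not reproduced here. It assumes that a node perceives decisions as fair when group acceptance rates within its k-hop neighborhood are approximately equal (a local form of demographic parity), and that fairness visibility is the fraction of nodes perceiving fairness. The function names and the tolerance parameter `eps` are illustrative assumptions, not the paper's notation.

```python
# Hypothetical sketch, not the paper's exact formulation.
# Assumption: a node "perceives fairness" if acceptance rates across
# demographic groups differ by at most `eps` within its k-hop neighborhood.
from collections import deque

def k_hop_neighborhood(adj, node, k):
    """Nodes within k hops of `node` (including the node itself), via BFS."""
    seen = {node}
    frontier = deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return seen

def perceives_fair(adj, outcomes, groups, node, k=1, eps=0.1):
    """True if group acceptance rates in the node's k-hop neighborhood
    differ by at most `eps` (local demographic parity)."""
    hood = k_hop_neighborhood(adj, node, k)
    rates = {}
    for g in {groups[v] for v in hood}:
        members = [v for v in hood if groups[v] == g]
        rates[g] = sum(outcomes[v] for v in members) / len(members)
    return max(rates.values()) - min(rates.values()) <= eps

def fairness_visibility(adj, outcomes, groups, k=1, eps=0.1):
    """Fraction of nodes that perceive the decisions as fair."""
    nodes = list(adj)
    return sum(perceives_fair(adj, outcomes, groups, v, k, eps)
               for v in nodes) / len(nodes)

# A path graph where each group mostly sees its own members: with k=1,
# the end nodes observe only their own group and perceive fairness,
# even though acceptance is perfectly segregated by group.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
groups = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
outcomes = {0: 1, 1: 1, 2: 0, 3: 0}
print(fairness_visibility(adj, outcomes, groups, k=1))  # 0.5
print(fairness_visibility(adj, outcomes, groups, k=2))  # 0.0
```

The toy example mirrors the abstract's pitfall: with small neighborhoods, homophilous nodes can perceive fairness despite a globally disparate outcome, and enlarging the neighborhood (k=2) exposes the disparity to every node.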
