I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions

While professionals increasingly rely on algorithmic systems for decision-making, on some occasions algorithmic decisions may be perceived as biased or unjust. Prior work has examined the perception of algorithmic decision-making from the user's point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N=99) rated their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making across three separate scenarios. Two of the three scenarios were independent of each other, while the third presented three different outcomes of the same algorithmic system, demonstrating how perception changes with different outputs. Quantitative analysis indicates that a) 'agreeing' with a decision does not mean the person 'deserves the outcome', b) perceiving the factors used in the decision-making as 'appropriate' does not make the system's decision 'fair', and c) perceiving a system's decision as 'not fair' affects the participants' 'trust' in the system. In addition, participants found proportional distribution of benefits fairer than other approaches. Qualitative analysis provides further insight into the information participants find essential for judging and understanding the fairness of an algorithmic decision-making system. Finally, the level of academic education plays a role in the perception of fairness and justice in algorithmic decision-making.
