Human Comprehension of Fairness in Machine Learning
Debjani Saha | Candice Schumann | Duncan C. McElfresh | John P. Dickerson | Michelle L. Mazurek | Michael Carl Tschantz