Emma Brunskill | Sharad Goel | Alex Chohlas-Wood | Madison Coots
[1] Alexandra Chouldechova,et al. The Frontiers of Fairness in Machine Learning , 2018, ArXiv.
[2] R. Hubbard,et al. Association of Rideshare-Based Transportation Services and Missed Primary Care Appointments: A Clinical Trial , 2018, JAMA internal medicine.
[3] Michael Carl Tschantz,et al. Discrimination in Online Advertising: A Multidisciplinary Inquiry , 2018 .
[4] Silvia Chiappa,et al. A Causal Bayesian Networks Viewpoint on Fairness , 2018, Privacy and Identity Management.
[5] Ravi Shroff,et al. Predictive Analytics for City Agencies: Lessons from Children's Services , 2017, Big Data.
[6] Ilya Shpitser,et al. Fair Inference on Outcomes , 2017, AAAI.
[7] Christopher T. Lowenkamp,et al. Gender, risk assessment, and sanctioning: The cost of treating women like men. , 2016, Law and human behavior.
[8] Andreas Krause,et al. Active Learning for Multi-Objective Optimization , 2013, ICML.
[9] Timnit Gebru,et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification , 2018, FAT.
[10] S. Maru,et al. Rides for Refugees: A Transportation Assistance Pilot for Women’s Health , 2019, Journal of Immigrant and Minority Health.
[11] Dan Jurafsky,et al. Racial disparities in automated speech recognition , 2020, Proceedings of the National Academy of Sciences.
[12] Yuriy Brun,et al. Preventing undesirable behavior of intelligent machines , 2019, Science.
[13] Elias Bareinboim,et al. Fairness in Decision-Making - The Causal Explanation Formula , 2018, AAAI.
[14] Alexandra Chouldechova,et al. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions , 2018, FAT.
[15] Alexandra Chouldechova,et al. A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores , 2020, CHI.
[16] Ravi Shroff,et al. The accuracy, equity, and jurisprudence of criminal risk assessment , 2021, Research Handbook on Big Data Law.
[17] G. Imbens,et al. The Surrogate Index: Combining Short-Term Proxies to Estimate Long-Term Treatment Effects More Rapidly and Precisely , 2019 .
[18] A. Gelman,et al. arm: Data Analysis Using Regression and Multilevel/Hierarchical Models , 2014 .
[19] Vashist Avadhanula,et al. A Near-Optimal Exploration-Exploitation Approach for Assortment Selection , 2016, EC.
[20] E. Bakshy,et al. Preference Learning for Real-World Multi-Objective Decision Making , 2020 .
[21] K. Maddulety,et al. Machine Learning in Banking Risk Management: A Literature Review , 2019, Risks.
[22] Esther Rolf,et al. Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning , 2020, ICML.
[23] Yuriy Brun,et al. Offline Contextual Bandits with High Probability Fairness Guarantees , 2019, NeurIPS.
[24] John Langford,et al. Resourceful Contextual Bandits , 2014, COLT.
[25] Joseph Hilbe,et al. Data Analysis Using Regression and Multilevel/Hierarchical Models , 2009 .
[26] Brendan T. O'Connor,et al. Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English , 2017, ArXiv.
[27] Sharad Goel,et al. Breaking Taboos in Fair Machine Learning: An Experimental Study , 2021, EAAMO.
[28] Richard J. Lemke,et al. The Creation and Validation of the Ohio Risk Assessment System (ORAS) , 2010 .
[29] Aleksandrs Slivkins,et al. Introduction to Multi-Armed Bandits , 2019, Found. Trends Mach. Learn..
[30] Hanghang Tong,et al. PC-Fairness: A Unified Framework for Measuring Causality-based Fairness , 2019, NeurIPS.
[31] Nikhil R. Devanur,et al. An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives , 2015, COLT.
[32] Alexandra Chouldechova,et al. Counterfactual risk assessments, evaluation, and fairness , 2020, FAT*.
[33] Wei Chu,et al. Preference learning with Gaussian processes , 2005, ICML.
[34] Yixin Wang,et al. Equal Opportunity and Affirmative Action via Counterfactual Predictions , 2019, ArXiv.
[35] R. Srikant,et al. Algorithms with Logarithmic or Sublinear Regret for Constrained Contextual Bandits , 2015, NIPS.
[36] Bernhard Schölkopf,et al. Avoiding Discrimination through Causal Reasoning , 2017, NIPS.
[37] Alexandra Chouldechova,et al. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting , 2019, FAT.
[38] Inioluwa Deborah Raji,et al. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products , 2019, AIES.
[39] A. Chouldechova,et al. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services , 2019, CHI.
[40] S. Goodman,et al. Machine Learning, Health Disparities, and Causal Reasoning , 2018, Annals of Internal Medicine.
[41] Carlos Eduardo Scheidegger,et al. Certifying and Removing Disparate Impact , 2014, KDD.
[42] R. Hubbard,et al. Rideshare-Based Medical Transportation for Medicaid Patients and Primary Care Show Rates: A Difference-in-Difference Analysis of a Pilot Program , 2018, Journal of General Internal Medicine.
[43] Sharad Goel,et al. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning , 2018, ArXiv.
[44] Brian W. Powers,et al. Dissecting racial bias in an algorithm used to manage the health of populations , 2019, Science.
[45] Avi Feller,et al. Algorithmic Decision Making and the Cost of Fairness , 2017, KDD.
[46] Alexander M. Holsinger,et al. Pretrial Risk Assessment: Improving Public Safety and Fairness in Pretrial Decision Making , 2015 .
[47] Christopher T. Lowenkamp,et al. Implementing Risk Assessment in the Federal Pretrial Services System , 2011, Special Issue: Evidence-Based Practices in Action.
[48] Eyke Hüllermeier,et al. Preference Learning and Ranking by Pairwise Comparison , 2010, Preference Learning.
[49] Luca Oneto,et al. Fairness in Machine Learning , 2020, INNSBDDL.
[50] Nathan Srebro,et al. Equality of Opportunity in Supervised Learning , 2016, NIPS.
[51] Arvind Narayanan,et al. Semantics derived automatically from language corpora contain human-like biases , 2016, Science.
[52] A. Korolova,et al. Discrimination through Optimization , 2019, Proc. ACM Hum. Comput. Interact..