Yasmeen Alufaisan | Laura R. Marusich | Jonathan Z. Bakdash | Yan Zhou | Murat Kantarcioglu
[1] Gary Klein, et al. Metrics for Explainable AI: Challenges and Prospects, 2018, ArXiv.
[2] K. Crawford, et al. Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 2019.
[3] Agustí Verde Parera, et al. General Data Protection Regulation, 2018.
[4] Galit Shmueli, et al. To Explain or To Predict?, 2010.
[5] Himabindu Lakkaraju, et al. "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations, 2019, AIES.
[6] Galit Shmueli, et al. To Explain or To Predict?, 2010, arXiv:1101.0891.
[7] R. Dawes, et al. Heuristics and Biases: Clinical versus Actuarial Judgment, 2002.
[8] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[9] J. Bakdash, et al. Repeated Measures Correlation, 2017, Frontiers in Psychology.
[10] Christopher D. Wickens, et al. When Users Want What's Not Best for Them, 1995.
[11] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artificial Intelligence.
[12] Gerd Gigerenzer, et al. Homo Heuristicus: Why Biased Minds Make Better Inferences, 2009, Topics in Cognitive Science.
[13] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[14] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[15] Timothy D. Wilson, et al. Telling More Than We Can Know: Verbal Reports on Mental Processes, 1977.
[16] Gerd Gigerenzer, et al. Models of Ecological Rationality: The Recognition Heuristic, 2002, Psychological Review.
[17] Bhavani M. Thuraisingham, et al. From Myths to Norms: Demystifying Data Mining Models with Instance-Based Transparency, 2017, IEEE 3rd International Conference on Collaboration and Internet Computing (CIC).
[18] Hany Farid, et al. The Accuracy, Fairness, and Limits of Predicting Recidivism, 2018, Science Advances.
[19] Limor Nadav-Greenberg, et al. Uncertainty Forecasts Improve Decision Making Among Nonexperts, 2009.
[20] Johannes Gehrke, et al. Intelligible Models for Classification and Regression, 2012, KDD.
[21] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017, IEEE International Conference on Computer Vision (ICCV).
[22] F. Keil, et al. Explanation and Understanding, 2015.
[23] Cynthia Rudin, et al. A Bayesian Framework for Learning Rule Sets for Interpretable Classification, 2017, Journal of Machine Learning Research.
[24] Michael Veale, et al. Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?, 2018, IEEE Security & Privacy.
[25] Don N. Kleinmuntz, et al. Information Displays and Decision Processes, 1993.
[26] Emily Chen, et al. How Do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation, 2018, ArXiv.
[27] Joshua de Leeuw, et al. jsPsych: A JavaScript Library for Creating Behavioral Experiments in a Web Browser, 2014, Behavior Research Methods.
[28] Kathleen L. Mosier, et al. Does Automation Bias Decision-Making?, 1999, International Journal of Human-Computer Studies.
[29] Murat Kantarcioglu, et al. Detecting Discrimination in a Black-Box Classifier, 2016, IEEE 2nd International Conference on Collaboration and Internet Computing (CIC).
[30] Yair Zick, et al. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems, 2016, IEEE Symposium on Security and Privacy (SP).
[31] N. McGlynn. Thinking Fast and Slow, 2014, Australian Veterinary Journal.
[32] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[33] Yunfeng Zhang, et al. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making, 2020, FAT*.
[34] Krzysztof Z. Gajos, et al. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems, 2020, IUI.
[35] Mary Missy Cummings, et al. Man versus Machine or Man + Machine?, 2014, IEEE Intelligent Systems.
[36] Been Kim, et al. Towards a Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[37] Sharad Goel, et al. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, 2018, ArXiv.
[38] Kacper Sokol, et al. Fairness, Accountability and Transparency in Artificial Intelligence: A Case Study of Logical Predictive Models, 2019, AIES.
[39] Vivian Lai, et al. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection, 2018, FAT.
[40] Ben Green, et al. The Principles and Limits of Algorithm-in-the-Loop Decision Making, 2019, Proceedings of the ACM on Human-Computer Interaction.