GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
[1] Duen Horng Chau, et al. TimberTrek: Exploring and Curating Sparse Decision Trees with Interactive Visualization, 2022, IEEE Visualization and Visual Analytics (VIS).
[2] Zijie J. Wang, et al. Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values, 2022, KDD.
[3] E. Bertini, et al. Context Sight: Model Understanding and Debugging via Interpretable Context, 2022, HILDA@SIGMOD.
[4] Enrico Bertini, et al. AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation, 2021, IEEE Visualization Conference (VIS).
[5] Himabindu Lakkaraju, et al. Counterfactual Explanations Can Be Manipulated, 2021, NeurIPS.
[6] Mark T. Keane, et al. If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques, 2021, IJCAI.
[7] Arvind Satyanarayan, et al. Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and Their Needs, 2021, CHI.
[8] Maximilian Schleich, et al. GeCo: Quality Counterfactual Explanations in Real Time, 2021, Proc. VLDB Endow.
[9] Jeffrey Heer, et al. Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models, 2021, ACL.
[10] John P. Dickerson, et al. Counterfactual Explanations for Machine Learning: A Review, 2020, arXiv.
[11] Ben Shneiderman, et al. Bridging the Gap Between Ethics and Practice, 2020, ACM Trans. Interact. Intell. Syst.
[12] Gilles Barthe, et al. Scaling Guarantees for Nearest Counterfactual Explanations, 2020, AIES.
[13] Bernhard Schölkopf, et al. A Survey of Algorithmic Recourse: Definitions, Formulations, Solutions, and Prospects, 2020, arXiv.
[14] Mark T. Keane, et al. Instance-Based Counterfactual Explanations for Time Series Classification, 2020, ICCBR.
[15] Himabindu Lakkaraju, et al. Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses, 2020, NeurIPS.
[16] Mark T. Keane, et al. On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning, 2020, AAAI.
[17] Huamin Qu, et al. DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models, 2020, IEEE Transactions on Visualization and Computer Graphics.
[18] Ken Kobayashi, et al. DACE: Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization, 2020, IJCAI.
[19] Rich Caruana, et al. How Interpretable and Trustworthy Are GAMs?, 2020, KDD.
[20] Julius von Kügelgen, et al. Algorithmic Recourse Under Imperfect Causal Knowledge: A Probabilistic Approach, 2020, NeurIPS.
[21] Barry Smyth, et al. Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI), 2020, ICCBR.
[22] C. Rudin, et al. In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction, 2020, Journal of Quantitative Criminology.
[23] Duen Horng Chau, et al. CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization, 2020, IEEE Transactions on Visualization and Computer Graphics.
[24] Amit Pitaru, et al. Teachable Machine: Approachable Web-Based Tool for Exploring Machine Learning Classification, 2020, CHI Extended Abstracts.
[25] E. Bertini, et al. ViCE: Visual Counterfactual Explanations for Machine Learning Models, 2020, IUI.
[26] Bernhard Schölkopf, et al. Algorithmic Recourse: From Counterfactual Explanations to Interventions, 2020, FAccT.
[27] Manuel Gomez-Rodriguez, et al. Decisions, Counterfactual Explanations and Strategic Behavior, 2020, NeurIPS.
[28] Jichen Zhu, et al. Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples, 2020, arXiv.
[29] Solon Barocas, et al. The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons, 2019, FAT*.
[30] Amit Sharma, et al. Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers, 2019, arXiv.
[31] Dongwon Lee, et al. GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction, 2019, KDD.
[32] K. Batmanghelich, et al. Explanation by Progressive Exaggeration, 2019, ICLR.
[33] S. Drucker, et al. TeleGam: Combining Visualization and Verbalization for Interpretable Machine Learning, 2019, IEEE Visualization Conference (VIS).
[34] Rich Caruana, et al. InterpretML: A Unified Framework for Machine Learning Interpretability, 2019, arXiv.
[35] Oluwasanmi Koyejo, et al. Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems, 2019, arXiv.
[36] Martin Wattenberg, et al. The What-If Tool: Interactive Probing of Machine Learning Models, 2019, IEEE Transactions on Visualization and Computer Graphics.
[37] Janis Klaise, et al. Interpretable Counterfactual Explanations Guided by Prototypes, 2019, ECML/PKDD.
[38] Amir-Hossein Karimi, et al. Model-Agnostic Counterfactual Explanations for Consequential Decisions, 2019, AISTATS.
[39] Amit Sharma, et al. Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations, 2019, FAT*.
[40] Haiyi Zhu, et al. Explaining Decision-Making Algorithms Through UI: Strategies to Help Non-Expert Stakeholders, 2019, CHI.
[41] Steven M. Drucker, et al. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models, 2019, CHI.
[42] Ziyan Wu, et al. Counterfactual Visual Explanations, 2019, ICML.
[43] Duen Horng Chau, et al. Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations, 2019, IEEE Transactions on Visualization and Computer Graphics.
[44] Subbarao Kambhampati, et al. Towards Understanding User Preferences for Explanation Types in Model Reconciliation, 2019, ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[45] Chris Russell, et al. Efficient Search for Diverse Coherent Explanations, 2019, FAT*.
[46] Cornelius J. König, et al. Psychology Meets Machine Learning: Interdisciplinary Perspectives on Algorithmic Job Candidate Screening, 2018.
[47] Cynthia Rudin, et al. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, 2018, Nature Machine Intelligence.
[48] Chris Russell, et al. Explaining Explanations in AI, 2018, FAT*.
[49] Rich Caruana, et al. Axiomatic Interpretability for Multiclass Additive Models, 2018, KDD.
[50] Yang Liu, et al. Actionable Recourse in Linear Classification, 2018, FAT*.
[51] Martin Wattenberg, et al. GAN Lab: Understanding Complex Deep Generative Models Using Interactive Visual Experimentation, 2018, IEEE Transactions on Visualization and Computer Graphics.
[52] Jon M. Kleinberg, et al. How Do Classifiers Induce Agents to Invest Effort Strategically?, 2018, EC.
[53] Mohan S. Kankanhalli, et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, 2018, CHI.
[54] Daniel S. Weld, et al. The Challenge of Crafting Intelligible Intelligence, 2018, Commun. ACM.
[55] Amit Dhurandhar, et al. Explanations Based on the Missing: Towards Contrastive Explanations with Pertinent Negatives, 2018, NeurIPS.
[56] Rosane Minghim, et al. A Visual Approach for Interactive Keyterm-Based Clustering, 2018, ACM Trans. Interact. Intell. Syst.
[57] Solon Barocas, et al. The Intuitive Appeal of Explainable Machines, 2018.
[58] Elmar Eisemann, et al. DeepEyes: Progressive Visual Analytics for Designing Deep Neural Networks, 2018, IEEE Transactions on Visualization and Computer Graphics.
[59] Tie-Yan Liu, et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree, 2017, NIPS.
[60] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, arXiv.
[61] Martin Wattenberg, et al. Direct-Manipulation Visualization of Deep Networks, 2017, arXiv.
[62] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[63] Minsuk Kahng, et al. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models, 2017, IEEE Transactions on Visualization and Computer Graphics.
[64] Mark O. Riedl, et al. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations, 2017, AIES.
[65] T. Lombrozo. Explanatory Preferences Shape Learning and Inference, 2016, Trends in Cognitive Sciences.
[66] Kenney Ng, et al. Interacting with Predictions: Visual Inspection of Black-Box Machine Learning Models, 2016, CHI.
[67] Tianqi Chen, et al. XGBoost: A Scalable Tree Boosting System, 2016, KDD.
[68] Johannes Gehrke, et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[69] Yixin Chen, et al. Optimal Action Extraction for Random Forests and Boosted Trees, 2015, KDD.
[70] Christos H. Papadimitriou, et al. Strategic Classification, 2015, ITCS.
[71] Aleksandrs Slivkins, et al. Incentivizing High Quality Crowdwork, 2015, SECO.
[72] Jesse J. Chandler, et al. Inside the Turk, 2014.
[73] Judith S. Olson, et al. Ways of Knowing in HCI, 2014, Springer New York.
[74] Susan T. Dumais, et al. Understanding User Behavior Through Log Data and Analysis, 2014, Ways of Knowing in HCI.
[75] Johannes Gehrke, et al. Accurate Intelligible Models with Pairwise Interactions, 2013, KDD.
[76] Risto Miikkulainen, et al. GRADE: Machine Learning Support for Graduate Admissions, 2013, AI Mag.
[77] Alison Shames. Performance Incentive Funding: Aligning Fiscal and Operational Responsibility to Produce More Safety at Less Cost, 2013.
[78] Johannes Gehrke, et al. Intelligible Models for Classification and Regression, 2012, KDD.
[79] Jeffrey Heer, et al. D³ Data-Driven Documents, 2011, IEEE Transactions on Visualization and Computer Graphics.
[80] I-Cheng Yeh, et al. The Comparisons of Data Mining Techniques for the Predictive Accuracy of Probability of Default of Credit Card Clients, 2009, Expert Syst. Appl.
[81] Aniket Kittur, et al. Crowdsourcing User Studies with Mechanical Turk, 2008, CHI.
[82] J. Grego, et al. Fast Stable Direct Fitting and Smoothness Selection for Generalized Additive Models, 2006, arXiv:0709.3906.
[83] Naeem Siddiqi, et al. Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring, 2005.
[84] Michael Redmond, et al. A Data-Driven Software Tool for Enabling Cooperative Information Sharing Among Police Departments, 2002, Eur. J. Oper. Res.
[85] Eric R. Ziegel, et al. Generalized Linear Models, 2002, Technometrics.
[86] Ben Shneiderman, et al. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations, 1996, IEEE Symposium on Visual Languages.
[87] Ron Kohavi, et al. Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid, 1996, KDD.
[88] Simson L. Garfinkel, et al. PGP: Pretty Good Privacy, 1994.
[89] F. Glover. Improved Linear Integer Programming Formulations of Nonlinear Integer Problems, 1975.
[90] Harvey M. Salkin, et al. The Knapsack Problem: A Survey, 1975.
[91] Marco Tulio Ribeiro, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[92] Matthew J. Saltzman, et al. COIN-OR: An Open-Source Library for Optimization, 2002.
[93] D. Norman, et al. User Centered System Design: New Perspectives on Human-Computer Interaction, 1986.