FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles
Ana Lucic | Harrie Oosterhuis | Hinda Haned | Maarten de Rijke