Amit Dhurandhar | Rachel K. E. Bellamy | Kush R. Varshney | Aleksandra Mojsilovic | Karthikeyan Shanmugam | Ronny Luss | Michael Hind | Stephanie Houde | Q. Vera Liao | Prasanna Sattigeri | Sami Mourad | Ramya Raghavendra | Samuel C. Hoffman | Pablo Pedemonte | Dennis Wei | Yunfeng Zhang | John Richards | Pin-Yu Chen | Vijay Arya | Moninder Singh
[1] Alun D. Preece, et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, 2018, arXiv.
[2] Charu C. Aggarwal, et al. Efficient Data Representation by Selecting Prototypes with Importance Weights, 2019, IEEE International Conference on Data Mining (ICDM).
[3] Luciano Floridi, et al. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 2017.
[4] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[5] Cynthia Rudin, et al. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, 2018, Nature Machine Intelligence.
[6] Amit Dhurandhar, et al. Leveraging Latent Features for Local Explanations, 2019, KDD.
[7] Sanjeeb Dash, et al. Boolean Decision Rules via Column Generation, 2018, NeurIPS.
[8] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[9] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[10] Abhishek Kumar, et al. Variational Inference of Disentangled Latent Concepts from Unlabeled Observations, 2017, ICLR.
[11] Ziming Huang, et al. On Sample Based Explanation Methods for NLP: Faithfulness, Efficiency and Semantic Evaluation, 2021, Annual Meeting of the Association for Computational Linguistics.
[12] Amit Dhurandhar, et al. Improving Simple Models with Confidence Profiles, 2018, NeurIPS.
[13] Michael Hind, et al. Explaining Explainable AI, 2019, XRDS.
[14] Julia Powles, et al. "Meaningful Information" and the Right to Explanation, 2017, FAT.
[15] Babak Salimi, et al. Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals, 2021, SIGMOD Conference.
[16] Amit Dhurandhar, et al. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, 2019, arXiv.
[17] Amit Dhurandhar, et al. TED: Teaching AI to Explain its Decisions, 2018, AIES.
[18] Sanjeeb Dash, et al. Generalized Linear Rule Models, 2019, ICML.
[19] Seth Flaxman, et al. EU Regulations on Algorithmic Decision-Making and a "Right to Explanation", 2016, arXiv.
[20] Suhang Wang, et al. GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction, 2020, KDD.
[21] Jianbo Li, et al. Outlier Impact Characterization for Time Series Data, 2021, AAAI.
[22] Amit Dhurandhar, et al. Explanations Based on the Missing: Towards Contrastive Explanations with Pertinent Negatives, 2018, NeurIPS.