Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models
Lovekesh Vig | C. Anantaram | Mouli Rastogi | Amit Sangroya