Ontology-based Interpretable Machine Learning for Textual Data
Phung Lai | NhatHai Phan | Han Hu | Anuja Badeti | David Newman | Dejing Dou