Maria Trocan | Mauro Dragoni | Artur d'Avila Garcez | Anna Saranti | Benedikt Wagner | Silvia Tulli | Adrien Bennetot | Andreas Holzinger | Natalia Díaz-Rodríguez | Ivan Donadello | Ayoub El Qadi El Haouari | Thomas Frossard | Raja Chatila