暂无分享,去创建一个
[1] Martin Wattenberg,et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) , 2017, ICML.
[2] Antonio Vetro,et al. On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI - Three Challenges for Future Research , 2020, Inf..
[3] Bolei Zhou,et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv..
[5] Henning Müller,et al. Regression Concept Vectors for Bidirectional Explanations in Histopathology , 2018, MLCN/DLF/iMIMIC@MICCAI.
[6] Luciano Serafini,et al. Logic Tensor Networks , 2020, Artif. Intell..
[7] Dumitru Erhan,et al. Going deeper with convolutions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Artur d'Avila Garcez,et al. Neural-Symbolic Integration for Fairness in AI , 2021, AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering.
[9] GiannottiFosca,et al. A Survey of Methods for Explaining Black Box Models , 2018 .
[10] Hrituraj Singh,et al. Exploring Neural Models for Parsing Natural Language into First-Order Logic , 2020, ArXiv.
[11] L. F. Molerio-Leon,et al. Survey Methods , 2011 .
[12] Artur d'Avila Garcez,et al. Layerwise Knowledge Extraction from Deep Convolutional Networks , 2020, ArXiv.
[13] Tarek R. Besold,et al. A historical perspective of explainable Artificial Intelligence , 2020, WIREs Data Mining Knowl. Discov..
[14] Artur S. d'Avila Garcez,et al. Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge , 2016, NeSy@HLAI.