Explainable Artificial Intelligence Approaches: A Survey
Sheikh Rabiul Islam | William Eberle | Sheikh Khaled Ghafoor | Mohiuddin Ahmed
[1] Cuntai Guan, et al. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[2] Brandon M. Greenwell, et al. Hands-On Machine Learning with R, 2019.
[3] G. A. Miller. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, 1956, Psychological Review.
[4] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[5] B. Bischl, et al. Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability, 2019, PKDD/ECML Workshops.
[6] Cengiz Öztireli, et al. Towards better understanding of gradient-based attribution methods for Deep Neural Networks, 2017, ICLR.
[7] Marcel van Gerven, et al. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges, 2018, ArXiv.
[8] Francisco Herrera, et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2020, Inf. Fusion.
[9] Cynthia Rudin, et al. Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the "Rashomon" Perspective, 2018.
[10] María José del Jesús, et al. Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What for, and Where to?, 2019, IEEE Computational Intelligence Magazine.
[11] Lalana Kagal, et al. Explaining Explanations: An Overview of Interpretability of Machine Learning, 2018, IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[12] Ping Wang, et al. Measuring Interpretability for Different Types of Machine Learning Models, 2018, PAKDD.
[13] Filip Karlo Dosilovic, et al. Explainable artificial intelligence: A survey, 2018, 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[14] Bart Baesens, et al. An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models, 2011, Decis. Support Syst.
[15] Ruocheng Guo, et al. Causal Interpretability for Machine Learning - Problems, Methods and Evaluation, 2020, SIGKDD Explor.
[16] M. Bouaziz, et al. An Introduction to Computer Security, 2012.
[17] Alexander Jung, et al. An Information-Theoretic Approach to Personalized Explainable Machine Learning, 2020, IEEE Signal Processing Letters.
[18] William Eberle, et al. Infusing domain knowledge in AI-based "black box" models for better explainability with application in bankruptcy prediction, 2019, ArXiv.
[19] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[20] Johannes Fürnkranz, et al. Rule Learning in a Nutshell, 2012.
[21] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[22] Ugur Kursuncu, et al. Knowledge Infused Learning (K-IL): Towards Deep Incorporation of Knowledge in Deep Learning, 2020, AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering.
[23] Nicholas Diakopoulos, et al. Algorithmic Accountability, 2015.
[24] Andrea Roli, et al. A neural network approach for credit risk evaluation, 2008.
[25] Carlos Eduardo Scheidegger, et al. Assessing the Local Interpretability of Machine Learning Models, 2019, ArXiv.
[26] Klaus-Robert Müller, et al. Explainable artificial intelligence, 2017.
[27] William Eberle, et al. Towards Quantification of Explainability in Explainable Artificial Intelligence Methods, 2019, FLAIRS.
[28] William Eberle, et al. Domain Knowledge Aided Explainable Artificial Intelligence for Intrusion Detection and Response, 2020, AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering.
[29] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.
[30] Joachim Fabini, et al. Explainability and Adversarial Robustness for RNNs, 2020, IEEE Sixth International Conference on Big Data Computing Service and Applications (BigDataService).
[31] Johanna D. Moore, et al. Explanation in second generation expert systems, 1993.
[32] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[33] Stefan Rüping, et al. Learning interpretable models, 2006.
[34] Daniel L. Marino, et al. An Adversarial Approach for Explainable AI in Intrusion Detection Systems, 2018, IECON, 44th Annual Conference of the IEEE Industrial Electronics Society.
[35] L. Shapley. A Value for n-person Games, 1988.
[36] William J. Clancey, et al. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI, 2019, ArXiv.
[37] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv.
[38] Oluwasanmi Koyejo, et al. Examples are not enough, learn to criticize! Criticism for Interpretability, 2016, NIPS.
[39] Leo Breiman, et al. Random Forests, 2001, Machine Learning.
[40] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[41] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001, Annals of Statistics.
[42] Margo I. Seltzer, et al. Scalable Bayesian Rule Lists, 2016, ICML.
[43] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.
[44] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[45] B. Chandrasekaran, et al. Explaining control strategies in problem solving, 1989, IEEE Expert.
[46] Quanshi Zhang, et al. Visual interpretability for deep learning: a survey, 2018, Frontiers of Information Technology & Electronic Engineering.
[47] Seth Flaxman, et al. EU regulations on algorithmic decision-making and a "right to explanation", 2016, ArXiv.
[48] Bogdan E. Popescu, et al. Predictive Learning via Rule Ensembles, 2008, ArXiv.
[49] Jing Liu, et al. Explaining the Attributes of a Deep Learning Based Intrusion Detection System for Industrial Control Networks, 2020, Sensors.
[50] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[51] Jarke J. van Wijk, et al. Instance-Level Explanations for Fraud Detection: A Case Study, 2018, ICML.
[52] Scott Lundberg, et al. An unexpected unity among methods for interpreting model predictions, 2016, ArXiv.
[53] Amit Dhurandhar, et al. TIP: Typifying the Interpretability of Procedures, 2017, ArXiv.
[54] Przemyslaw Biecek, et al. Explanations of model predictions with live and breakDown packages, 2018, R J.