Chun Ouyang | Renuka Sindhgatta | Catarina Moreira | Peter Bruza | Yu-Liang Chou | Mythreyi Velmurugan
[1] Peter Norvig, et al. Artificial Intelligence: A Modern Approach, 1995.
[3] Yujia Zhang, et al. Why should you trust my interpretation? Understanding uncertainty in LIME predictions, 2019, ArXiv.
[4] David Heckerman, et al. A Tutorial on Learning with Bayesian Networks, 1999, Innovations in Bayesian Networks.
[5] Erik Strumbelj, et al. Explaining prediction models and individual predictions with feature contributions, 2014, Knowledge and Information Systems.
[6] Javier Arroyo, et al. Explainability of a Machine Learning Granting Scoring Model in Peer-to-Peer Lending, 2020, IEEE Access.
[7] Zachary C. Lipton, et al. The mythos of model interpretability, 2018, Commun. ACM.
[8] Moshe Y. Vardi. To serve humanity, 2019, Commun. ACM.
[9] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[10] Renuka Sindhgatta, et al. Exploring Interpretability for Predictive Process Analytics, 2020, ICSOC.
[11] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv preprint 1702.08608.
[12] Oluwasanmi Koyejo, et al. Examples are not enough, learn to criticize! Criticism for Interpretability, 2016, NIPS.
[13] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[14] Joaquin Quiñonero Candela, et al. Counterfactual reasoning and learning systems: the example of computational advertising, 2013, J. Mach. Learn. Res.
[15] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[16] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[17] L. Shapley. A Value for n-person Games, 1988.
[19] M. Hunt, et al. Bayesian networks and decision trees in the diagnosis of female urinary incontinence, 2000, Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
[20] Eric D. Ragan, et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems, 2018, ACM Trans. Interact. Intell. Syst.
[22] Georg Langs, et al. Causability and explainability of artificial intelligence in medicine, 2019, WIREs Data Mining Knowl. Discov.
[23] David Maxwell Chickering, et al. Learning Bayesian Networks is NP-Complete, 2016, AISTATS.
[24] Allan Tucker, et al. Learning Bayesian networks from big data with greedy search: computational complexity and efficient implementation, 2018, Statistics and Computing.
[25] Dursun Delen, et al. A synthetic informative minority over-sampling (SIMO) algorithm leveraging support vector machine to enhance learning from imbalanced datasets, 2018, Decis. Support Syst.
[26] Elias Chaibub Neto, et al. Towards causality-aware predictions in static machine learning tasks: the linear structural causal model case, 2020, ArXiv preprint 2001.03998.
[27] Ali Movahedi, et al. Toward safer highways, application of XGBoost and SHAP for real-time accident detection and feature analysis, 2019, Accident Analysis & Prevention.
[28] Suchi Saria, et al. Reliable Decision Support using Counterfactual Models, 2017, NIPS.
[29] Nir Friedman, et al. Probabilistic Graphical Models: Principles and Techniques, 2009.
[30] Jure Leskovec, et al. Faithful and Customizable Explanations of Black Box Models, 2019, AIES.
[31] Uri Shalit, et al. Learning Representations for Counterfactual Inference, 2016, ICML.
[32] Jinsoo Park, et al. Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information, 2020, Decis. Support Syst.
[33] David Maxwell Chickering, et al. Learning Bayesian networks: The combination of knowledge and statistical data, 1995, Mach. Learn.
[34] Sherif Sakr, et al. Interpretability in HealthCare: A Comparative Study of Local Machine Learning Interpretability Techniques, 2019, IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS).
[36] Amit V. Deokar, et al. Disentangling consumer recommendations: Explaining and predicting airline recommendations based on online reviews, 2018, Decis. Support Syst.
[38] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[39] Judea Pearl, et al. The seven tools of causal inference, with reflections on machine learning, 2019, Commun. ACM.
[40] Judea Pearl, et al. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, 1991, Morgan Kaufmann Series in Representation and Reasoning.
[41] Q. Liao, et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences, 2020, CHI.
[42] Dov M. Gabbay, et al. Advice on Abductive Logic, 2006, Log. J. IGPL.
[43] Chandan Singh, et al. Definitions, methods, and applications in interpretable machine learning, 2019, Proceedings of the National Academy of Sciences.