Integration of Explainable AI and Blockchain for Secure Storage of Human Readable Justifications for Credit Risk Assessment
Ketan Kotecha | Rahee Walambe | Mihir Pandya | Manas Ojha | Ashwin Kolhatkar | Akash Kademani | Sakshi Kathote
[1] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[2] Alexander Binder, et al. Explaining nonlinear classification decisions with deep Taylor decomposition, 2015, Pattern Recognit.
[3] José Luis Navarro, et al. A fuzzy clustering algorithm enhancing local model interpretability, 2007, Soft Comput.
[4] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[5] Paolo Giudici, et al. Explainable Machine Learning in Credit Risk Management, 2019, Computational Economics.
[6] Gerald Fahner, et al. Developing Transparent Credit Risk Scorecards More Effectively: An Explainable Artificial Intelligence Approach, 2018.
[7] Vinod Sharma, et al. An interpretable neuro-fuzzy approach to stock price forecasting, 2017, Soft Computing.
[8] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[9] Francisco Herrera, et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2020, Inf. Fusion.
[10] M. Stone. Cross-Validatory Choice and Assessment of Statistical Predictions, 1976.
[11] Fran Casino, et al. A systematic literature review of blockchain-based applications: Current status, classification and open issues, 2019, Telematics Informatics.
[12] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[13] Sameer Singh, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, NAACL.
[15] Lalana Kagal, et al. Explaining Explanations: An Overview of Interpretability of Machine Learning, 2018, IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[16] Enrico Bertini, et al. ViCE: visual counterfactual explanations for machine learning models, 2020, IUI.
[17] Martin Wattenberg, et al. How to Use t-SNE Effectively, 2016.
[18] L. Shapley. A Value for n-person Games, 1988.
[19] Ankur Taly, et al. Gradients of Counterfactuals, 2016, arXiv.
[20] Cynthia Rudin, et al. An Interpretable Model with Globally Consistent Explanations for Credit Risk, 2018, arXiv.
[21] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[22] David J. Hand, et al. Measuring classifier performance: a coherent alternative to the area under the ROC curve, 2009, Machine Learning.
[23] Keun Ho Ryu, et al. Advanced Neural Network Approach, Its Explanation with LIME for Credit Scoring Application, 2019, ACIIDS.
[24] Jimeng Sun, et al. RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism, 2016, NIPS.
[25] Khaled Salah, et al. Blockchain for explainable and trustworthy artificial intelligence, 2019, WIREs Data Mining Knowl. Discov.
[26] Carl Doersch, et al. Tutorial on Variational Autoencoders, 2016, arXiv.
[27] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.