Opportunities for Explainable Artificial Intelligence in Aerospace Predictive Maintenance
[1] Jimeng Sun, et al. RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism, 2016, NIPS.
[2] Luciano Floridi, et al. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 2017.
[3] Goran Nenadic, et al. Wind Turbine operational state prediction: towards featureless, end-to-end predictive maintenance, 2019, 2019 IEEE International Conference on Big Data (Big Data).
[4] Jaime S. Cardoso, et al. Machine Learning Interpretability: A Survey on Methods and Metrics, 2019, Electronics.
[5] Alessandro Rinaldo, et al. Distribution-Free Predictive Inference for Regression, 2016, Journal of the American Statistical Association.
[6] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[7] Oluwasanmi Koyejo, et al. Examples are not enough, learn to criticize! Criticism for Interpretability, 2016, NIPS.
[8] Franco Turini, et al. Local Rule-Based Explanations of Black Box Decision Systems, 2018, ArXiv.
[9] Geoffrey E. Hinton, et al. Distilling a Neural Network Into a Soft Decision Tree, 2017, CEx@AI*IA.
[10] Artur S. d'Avila Garcez, et al. Logic Tensor Networks for Semantic Image Interpretation, 2017, IJCAI.
[11] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[12] Klaus-Robert Müller, et al. Explainable artificial intelligence, 2017.
[13] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[14] Arthur H. A. Melani, et al. Equipment failure prediction based on neural network analysis incorporating maintainers inspection findings, 2017, 2017 Annual Reliability and Maintainability Symposium (RAMS).
[15] Bernhard Haslhofer, et al. Predicting Time-to-Failure of Plasma Etching Equipment using Machine Learning, 2019, 2019 IEEE International Conference on Prognostics and Health Management (ICPHM).
[16] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[17] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[18] L. M. Lieberman, et al. What If …, 1983, Journal of Learning Disabilities.
[19] Bob L. Sturm, et al. Local Interpretable Model-Agnostic Explanations for Music Content Analysis, 2017, ISMIR.
[20] M. Elliot. What is Explainable AI, 2018.
[21] Sercan Ömer Arik, et al. ProtoAttend: Attention-Based Prototypical Learning, 2019, ArXiv.
[22] Minsuk Kahng, et al. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models, 2017, IEEE Transactions on Visualization and Computer Graphics.
[23] NASA. Research and technology goals and objectives for Integrated Vehicle Health Management (IVHM), 2019.
[24] Agustí Verde Parera, et al. General Data Protection Regulation, 2018.
[25] Ray Pugh, et al. Operations and Maintenance Best Practices: A Guide to Achieving Operational Efficiency, 2002.
[26] Cynthia Rudin, et al. Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions, 2017, AAAI.
[27] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[28] Svetha Venkatesh, et al. Deepr: A Convolutional Net for Medical Records, 2016, IEEE Journal of Biomedical and Health Informatics.
[29] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.