Instance-Wise Causal Feature Selection Explainer for Rotating Machinery Fault Diagnosis

Artificial neural networks in prognostics and health management (PHM), and especially in intelligent fault diagnosis (IFD), have made great progress, but their black-box nature leads to a lack of interpretability and weak robustness under complex environmental variations. When the environment changes, the model tends to make wrong decisions, which can be costly, especially for major equipment whose users trust the model too readily. Researchers have therefore studied eXplainable Artificial Intelligence (XAI) based IFD to better understand such models. Most existing approaches express interpretability by drawing gradient-based saliency maps that show where the model focuses, which gives little consideration to causal effects and is not sparse enough, lacking quantitative metrics. To address these issues, we design an XAI method that uses a neural network as an instance-wise feature selector to select the frequency bands with stronger causal strength on the diagnosis result than others, and thereby explains the diagnosis model. We quantify causal strength with the relative entropy distance (RED) and use a simplified RED as the objective function for optimizing the selector model. Finally, our experiments demonstrate the superiority of our method over the L2X algorithm, measured by post-hoc accuracy (PHA), a variant of the average causal effect (ACE), and visual plots.
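
To make the selection pipeline described above concrete, the following minimal PyTorch sketch illustrates one way an instance-wise selector network could be trained against a frozen black-box diagnosis model. It is not the authors' implementation: the Gumbel-softmax relaxation, the KL-divergence term standing in for the simplified RED objective, and all names and hyperparameters (BandSelector, selector_loss, n_bands, k, tau) are illustrative assumptions.

# Hypothetical sketch: an instance-wise selector that picks roughly k frequency bands of
# an input spectrum, trained so that the black-box model's prediction on the selected
# bands stays close to its prediction on the full spectrum. The KL term is only a proxy
# for the paper's relative entropy distance (RED); the black-box model is assumed to map
# a (batch, n_bands) spectrum to class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BandSelector(nn.Module):
    """Scores each frequency band and samples a sparse soft mask via Gumbel-softmax."""
    def __init__(self, n_bands: int, hidden: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands),
        )

    def forward(self, spectrum: torch.Tensor, k: int = 8, tau: float = 0.5) -> torch.Tensor:
        logits = self.scorer(spectrum)                        # (batch, n_bands)
        # Relaxed top-k: draw k Gumbel-softmax samples and keep the element-wise maximum,
        # yielding a soft mask in [0, 1] concentrated on roughly k bands per instance.
        samples = torch.stack(
            [F.gumbel_softmax(logits, tau=tau, hard=False) for _ in range(k)], dim=0
        )
        return samples.max(dim=0).values

def selector_loss(black_box: nn.Module, selector: BandSelector,
                  spectrum: torch.Tensor) -> torch.Tensor:
    """KL(p_full || p_selected): keep the prediction from the selected bands close to the
    prediction from the full spectrum (a stand-in objective, not the paper's RED)."""
    with torch.no_grad():
        p_full = F.softmax(black_box(spectrum), dim=-1)       # reference distribution
    mask = selector(spectrum)
    log_p_sel = F.log_softmax(black_box(spectrum * mask), dim=-1)
    return F.kl_div(log_p_sel, p_full, reduction="batchmean")

With such a selector, a PHA-style sanity check could compare the class predicted from the masked spectrum against the class predicted from the full spectrum over a held-out set; this mirrors, but does not reproduce, the post-hoc accuracy metric mentioned in the abstract.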

[1] Y. Zi, et al. Causal Consistency Network: A Collaborative Multimachine Generalization Method for Bearing Fault Diagnosis, 2023, IEEE Transactions on Industrial Informatics.

[2] Jian Liang, et al. Causality Inspired Representation Learning for Domain Generalization, 2022, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[3] S. Vadera, et al. A Deep Explainable Model for Fault Prediction Using IoT Sensors, 2022, IEEE Access.

[4] Y. Zi, et al. Causal Disentanglement: A Generalized Bearing Fault Diagnostic Framework in Continuous Degradation Mode, 2021, IEEE Transactions on Neural Networks and Learning Systems.

[5] Jie Liu, et al. High-speed train fault detection with unsupervised causality-based feature extraction methods, 2021, Adv. Eng. Informatics.

[6] Vineeth N Balasubramanian, et al. Instance-wise Causal Feature Selection for Model Interpretation, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).

[7] Peng Cui, et al. Deep Stable Learning for Out-Of-Distribution Generalization, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[8] Yoshua Bengio, et al. Towards Causal Representation Learning, 2021, arXiv.

[9] Jong-Myon Kim, et al. Bearing Fault Diagnosis Using Grad-CAM and Acoustic Emission Signals, 2020, Springer Proceedings in Physics.

[10] Shibin Wang, et al. Applications of Unsupervised Deep Transfer Learning to Intelligent Fault Diagnosis: A Survey and Comparative Study, 2019, IEEE Transactions on Instrumentation and Measurement.

[11] [Re] Generative causal explanations of black-box classifiers, 2021.

[12] Marissa Connor, et al. Generative causal explanations of black-box classifiers, 2020, NeurIPS.

[13] Xing Wang, et al. How Does Selective Mechanism Improve Self-Attention Networks?, 2020, ACL.

[14] Ruqiang Yan, et al. Deep Learning Algorithms for Rotating Machinery Intelligent Diagnosis: An Open Source Benchmark Study, 2020, ISA Transactions.

[15] Xinyu Shao, et al. Stacked pruning sparse denoising autoencoder based intelligent fault diagnosis of rolling bearings, 2020, Appl. Soft Comput.

[16] Wenquan Feng, et al. Knowledge distilling based model compression and feature learning in fault diagnosis, 2020, Appl. Soft Comput.

[17] Ching-Hung Lee, et al. Vibration Signals Analysis by Explainable Artificial Intelligence (XAI) Approach: Application on Bearing Faults Diagnosis, 2020, IEEE Access.

[18] Guang-Zhong Yang, et al. XAI—Explainable artificial intelligence, 2019, Science Robotics.

[19] Walter Karlen, et al. CXPlain: Causal Explanations for Model Interpretation under Uncertainty, 2019, NeurIPS.

[20] Serkan Kiranyaz, et al. A Generic Intelligent Bearing Fault Diagnosis System Using Compact Adaptive 1D CNN Classifier, 2018, Journal of Signal Processing Systems.

[21] Mihaela van der Schaar, et al. INVASE: Instance-wise Variable Selection using Neural Networks, 2018, ICLR.

[22] Chuang Sun, et al. Explainable Convolutional Neural Network for Gearbox Fault Diagnosis, 2019, Procedia CIRP.

[23] Fan Xu, et al. Roller bearing fault diagnosis using stacked denoising autoencoder in deep learning and Gath-Geva clustering algorithm without principal component analysis and data label, 2018, Appl. Soft Comput.

[24] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.

[25] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.

[26] Abhishek Das, et al. Grad-CAM: Why did you say that?, 2016, arXiv.

[27] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.

[28] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.

[29] Moritz Grosse-Wentrup, et al. Quantifying causal influences, 2012, arXiv:1203.6502.

[30] V. Rai, et al. Bearing fault diagnosis using FFT of intrinsic mode functions in Hilbert-Huang transform, 2007.

[31] Liqing Zhang, et al. Saliency Detection: A Spectral Residual Approach, 2007, 2007 IEEE Conference on Computer Vision and Pattern Recognition.

[32] D. Strickland, et al. LRP: a multifunctional scavenger and signaling receptor, 2001, The Journal of Clinical Investigation.

[33] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.

[34] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
