TSXplain: Demystification of DNN Decisions for Time-Series using Natural Language and Statistical Features

Neural networks (NNs) are often considered black-boxes due to the lack of explainability and transparency of their decisions. This significantly hampers their deployment in environments where explainability is as essential as the accuracy of the system. Recently, significant efforts have been made towards the interpretability of these deep networks with the aim of opening up the black-box. However, most of these approaches are developed specifically for visual modalities. In addition, the interpretations they provide require expert knowledge and understanding to be intelligible, which indicates a vital gap between the explainability offered by these systems and the needs of a novice user. To bridge this gap, we present a novel framework, the Time-Series eXplanation (TSXplain) system, which produces natural language based explanations of the decisions taken by a NN. It uses extracted statistical features to describe the decision of the NN, merging the deep learning world with that of statistics. The two-level explanation provides an ample description of the decision made by the network to aid expert and novice users alike. Our survey and reliability assessment test confirm that the generated explanations are meaningful and correct. We believe that generating natural language based descriptions of the network's decisions is a big step towards opening up the black-box.
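
The core idea sketched in the abstract (extract descriptive statistics from the input window and verbalize them to justify the network's prediction, at two levels of detail) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the feature set (mean, standard deviation, peak position, trend slope), the two-level templates, and the helper names statistical_features and explain are assumptions made purely for illustration.

```python
import numpy as np


def statistical_features(series: np.ndarray) -> dict:
    """Compute a few descriptive statistics of a 1-D time-series window.

    The exact feature set used by TSXplain is not reproduced here; these are
    common choices assumed for illustration.
    """
    t = np.arange(len(series))
    slope = np.polyfit(t, series, deg=1)[0]                    # linear trend
    peak_idx = int(np.argmax(np.abs(series - series.mean())))  # largest deviation
    return {
        "mean": float(series.mean()),
        "std": float(series.std()),
        "peak_value": float(series[peak_idx]),
        "peak_position": peak_idx,
        "trend_slope": float(slope),
    }


def explain(label: str, feats: dict, expert: bool = False) -> str:
    """Fill a natural-language template with the extracted features.

    The two levels of detail mirror the paper's idea of serving novice and
    expert users alike; the wording of the templates is hypothetical.
    """
    novice = (
        f"The network classified this sequence as '{label}' mainly because of "
        f"an unusual value around time step {feats['peak_position']}."
    )
    if not expert:
        return novice
    return (
        novice
        + f" Details: mean={feats['mean']:.2f}, std={feats['std']:.2f}, "
        f"peak={feats['peak_value']:.2f}, trend slope={feats['trend_slope']:.4f}."
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.normal(0.0, 1.0, size=50)
    window[30] += 6.0                      # inject a point anomaly
    feats = statistical_features(window)
    # In the full system the label would come from the trained NN; here it is fixed.
    print(explain("anomalous", feats, expert=True))
```

In the actual pipeline the label and the salient time steps would be taken from the trained network (e.g. via its prediction and an attribution method) rather than hard-coded as above; the sketch only shows how statistical features can be turned into a two-level textual explanation.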
