Additive Explanations for Anomalies Detected from Multivariate Temporal Data

Detecting anomalies in high-dimensional multivariate temporal data is challenging because of the complex, non-linear relationships between signals. Recently, deep learning methods based on autoencoders have been shown to capture these relationships and accurately distinguish between normal and abnormal patterns of behavior, even in fully unsupervised scenarios. However, validating the detected anomalies is difficult without additional explanations. In this paper, we extend SHAP -- a unified framework for providing additive explanations, previously applied to supervised models -- with influence weighting, in order to explain anomalies detected from multivariate time series with a GRU-based autoencoder. Namely, we extract the signals that contribute most to an anomaly and those that counteract it. We evaluate our approach on two use cases and show that we can generate insightful explanations for both single and multiple anomalies.
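
To make the pipeline concrete, the sketch below illustrates the general idea under simplifying assumptions: a GRU autoencoder scores windows by reconstruction error, and KernelSHAP attributes that error to the individual input signals. This is not the paper's exact method (the influence weighting is omitted, and the architecture, names such as GRUAutoencoder, and the toy data are assumptions for illustration).

```python
import numpy as np
import torch
import torch.nn as nn
import shap

T, D = 10, 4  # window length, number of signals (illustrative sizes)

class GRUAutoencoder(nn.Module):
    """Toy GRU autoencoder: encode a window, repeat the code, decode."""
    def __init__(self, n_signals, hidden=16):
        super().__init__()
        self.encoder = nn.GRU(n_signals, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_signals)

    def forward(self, x):                              # x: (batch, T, D)
        _, h = self.encoder(x)                         # h: (1, batch, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)  # code repeated per step
        dec, _ = self.decoder(z)
        return self.out(dec)                           # reconstruction

model = GRUAutoencoder(D).eval()  # assume the model has already been trained

def anomaly_score(flat_windows):
    """Reconstruction MSE per window; takes (n, T*D) arrays for KernelSHAP."""
    x = torch.tensor(flat_windows, dtype=torch.float32).reshape(-1, T, D)
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=(1, 2))
    return err.numpy()

rng = np.random.default_rng(0)
background = rng.normal(size=(50, T * D))    # stand-in for normal windows
anomaly = rng.normal(size=(1, T * D)) + 3.0  # stand-in anomalous window

explainer = shap.KernelExplainer(anomaly_score, background)
phi = explainer.shap_values(anomaly, nsamples=200)[0]  # (T*D,) attributions

# Aggregate attributions per signal over time: positive totals push the
# anomaly score up (contributing signals), negative totals pull it down
# (counteracting signals).
per_signal = phi.reshape(T, D).sum(axis=0)
order = np.argsort(per_signal)
print("most contributing signal:", order[-1], per_signal[order[-1]])
print("most counteracting signal:", order[0], per_signal[order[0]])
```

In this reading, the additive property of SHAP is what licenses the final step: the per-feature attributions sum to the difference between the window's anomaly score and the background average, so ranking them separates the signals driving the anomaly from those counteracting it.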