Exploring Interpretable Predictive Models for Business Processes

There has been growing interest in the literature on applying deep learning models to predict business process behaviour, such as the next event in a case, the completion time of an event, and the remaining execution trace of a case. Although these models achieve high accuracy, their sophisticated internal representations offer little or no insight into why a particular prediction was made, so they end up being used as black boxes. Consequently, an interpretable model is needed to enable transparency and to let users judge when, and to what extent, they can rely on the predictions. This paper explores an interpretable and accurate attention-based Long Short-Term Memory (LSTM) model for predicting business process behaviour. The interpretable model provides insights into which model inputs influence a prediction, thereby facilitating transparency. An experimental evaluation shows that the proposed model, while supporting interpretability, also delivers predictions as accurate as those of existing LSTM models for predicting process behaviour. The evaluation further shows that attention mechanisms in LSTMs provide a sound approach to generating meaningful interpretations across different tasks in predictive process analytics.
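
To make the idea concrete, the sketch below shows a minimal attention-based LSTM for next-activity prediction, where the attention weights over the events of a case prefix serve as the per-input interpretation. This is an illustrative assumption of the general technique, not the paper's actual architecture; all names and hyperparameters (AttentionLSTM, embed_dim, hidden_dim) are hypothetical.

```python
# Minimal sketch: attention-based LSTM for next-activity prediction.
# The attention weights indicate how much each past event influenced
# the prediction. Illustrative only; not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionLSTM(nn.Module):
    def __init__(self, num_activities: int, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_activities, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # One score per time step; a softmax over steps yields attention weights.
        self.attn = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, num_activities)

    def forward(self, events: torch.Tensor):
        # events: (batch, seq_len) integer-encoded activity prefix of a case
        h, _ = self.lstm(self.embed(events))                   # (batch, seq_len, hidden)
        weights = F.softmax(self.attn(h).squeeze(-1), dim=1)   # (batch, seq_len)
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # (batch, hidden)
        logits = self.out(context)                             # next-activity scores
        return logits, weights                                 # weights explain the prediction


# Usage: weights[i, t] shows how much event t of case i contributed
# to the predicted next activity, and sums to 1 over the prefix.
model = AttentionLSTM(num_activities=10)
prefix = torch.randint(0, 10, (1, 5))  # one case with a 5-event prefix
logits, weights = model(prefix)
print(weights)
```

Because the prediction is a weighted combination of the per-event hidden states, the weights can be read directly as the influence of each event, which is the kind of input-level interpretation the abstract describes.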
