Unveiling MIMETIC: Interpreting Deep Learning Traffic Classifiers via XAI Techniques

The widespread use of powerful mobile devices has deeply affected the mix of traffic traversing both the Internet and enterprise networks (with bring-your-own-device policies). Traffic encryption has become extremely common, and the rapid proliferation of mobile apps, together with their easy distribution and frequent updates, has created a particularly challenging scenario for traffic classification and its applications, especially network-security-related ones. The recent rise of Deep Learning (DL) has responded to this challenge by removing the need for time-consuming, human-limited handcrafted feature design and by providing better classification performance. The downside of these advantages is the lack of interpretability of such black-box approaches, which limits or prevents their adoption in contexts where the reliability of results or the interpretability of policies is necessary. To cope with these limitations, eXplainable Artificial Intelligence (XAI) techniques have recently been the subject of intensive research. Along these lines, our work applies XAI techniques (namely, Deep SHAP) to interpret the behavior of a state-of-the-art multimodal DL traffic classifier. Unlike most results in XAI, we aim at a global interpretation rather than sample-based ones. The results quantify the importance of each modality (payload- or header-based), and of specific subsets of inputs (e.g., TLS SNI and TCP Window Size), in determining the classification outcome, down to the per-class (viz. application) level. The analysis is based on a recent, publicly released dataset focused on mobile-app traffic.
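To make the per-modality attribution step concrete, the following is a minimal, hypothetical sketch of how Deep SHAP attributions can be computed and aggregated for a two-input (payload and header) classifier using the shap library's DeepExplainer. The model architecture, shapes, and variable names below are illustrative assumptions, not the authors' released code or the actual MIMETIC architecture; the aggregation simply averages absolute SHAP values per modality and per class to obtain a global view.

import numpy as np
import shap
import tensorflow as tf

# Hypothetical stand-in for a multimodal classifier: one branch takes payload
# bytes, the other takes per-packet header fields; 10 app classes are assumed.
payload_in = tf.keras.Input(shape=(576,), name="payload")   # e.g., first payload bytes
header_in = tf.keras.Input(shape=(32, 4), name="header")    # e.g., per-packet header fields
h = tf.keras.layers.Dense(64, activation="relu")(payload_in)
g = tf.keras.layers.Dense(64, activation="relu")(tf.keras.layers.Flatten()(header_in))
merged = tf.keras.layers.Concatenate()([h, g])
out = tf.keras.layers.Dense(10, activation="softmax")(merged)
model = tf.keras.Model([payload_in, header_in], out)

# Dummy evaluation data standing in for (normalized) traffic inputs.
X_payload = np.random.rand(200, 576).astype("float32")
X_header = np.random.rand(200, 32, 4).astype("float32")

# Deep SHAP: attribute each prediction to the inputs, against a background set.
background = [X_payload[:50], X_header[:50]]
explainer = shap.DeepExplainer(model, background)
# With classic shap versions, shap_values is a list over classes; each element
# holds one attribution array per input branch (newer versions may differ).
shap_values = explainer.shap_values([X_payload[50:150], X_header[50:150]])

# Global interpretation: mean absolute attribution per modality, per class.
for class_idx, per_input in enumerate(shap_values):
    payload_imp = np.abs(per_input[0]).mean()   # payload-modality importance
    header_imp = np.abs(per_input[1]).mean()    # header-modality importance
    print(f"class {class_idx}: payload={payload_imp:.4f}, header={header_imp:.4f}")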
