Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations

Deep neural networks (DNNs) are commonly used for various traffic analysis problems, such as website fingerprinting and flow correlation, as they outperform traditional (e.g., statistical) techniques by large margins. However, DNNs are known to be vulnerable to adversarial examples: inputs crafted with small adversarial perturbations that cause the model to label them incorrectly. In this paper, for the first time, we show that an adversary can defeat DNN-based traffic analysis techniques by applying adversarial perturbations to the patterns of live network traffic. Applying adversarial perturbations (examples) to traffic analysis classifiers poses two major challenges. First, the perturbing party (i.e., the adversary) must be able to apply the adversarial network perturbations on live traffic, without buffering traffic or having prior knowledge of upcoming network packets. We design a systematic approach to create adversarial perturbations that are independent of their target network connections and can therefore be applied in real time to live traffic; we call such adversarial perturbations blind. Second, unlike in image classification applications, perturbing traffic features is not straightforward, as it must be done while preserving the correctness of dependent traffic features. We address this challenge by introducing remapping functions that enforce various network constraints while creating blind adversarial perturbations. Our blind adversarial perturbation algorithm is generic and can be applied to various types of traffic classifiers. We demonstrate this by implementing a Tor pluggable transport that applies adversarial perturbations on live Tor connections to defeat DNN-based website fingerprinting and flow correlation techniques, the two most-studied types of traffic analysis.
We show that our blind adversarial perturbations are even transferable between different models and architectures, so they can be applied by black-box adversaries. Finally, we show that existing countermeasures perform poorly against blind adversarial perturbations; we therefore introduce a tailored countermeasure.
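The core idea above — optimizing a single, input-agnostic perturbation over a whole set of training flows, with a remapping function keeping the perturbed features valid — can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: it uses plain NumPy, a toy logistic-regression classifier in place of a DNN, synthetic inter-packet-delay "flows", and a non-negativity remapping; all names and parameters here are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN flow classifier: logistic regression over
# 20 inter-packet delays (illustrative only).
w = rng.normal(size=20)

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def remap(delays):
    # Remapping function: perturbed inter-packet delays must stay
    # non-negative to be realizable on live traffic.
    return np.maximum(delays, 0.0)

# Synthetic training flows; the adversary needs no knowledge of the
# future target flows, only flows drawn from a similar distribution.
flows = rng.exponential(scale=0.1, size=(200, 20))
labels = (predict_proba(flows) > 0.5).astype(float)

delta = np.zeros(20)      # the blind perturbation, shared by all flows
lr, budget = 0.05, 0.05   # step size and per-delay amplitude cap

for _ in range(100):
    x = remap(flows + delta)
    p = predict_proba(x)
    # Gradient of the cross-entropy loss w.r.t. delta (the (x > 0) mask
    # passes the gradient through the remapping); we *ascend* it to
    # increase the classifier's loss on every flow at once.
    grad = ((p - labels)[:, None] * w * (x > 0)).mean(axis=0)
    delta = np.clip(delta + lr * np.sign(grad), -budget, budget)

acc_before = (((predict_proba(flows) > 0.5) == labels.astype(bool))).mean()
acc_after = ((predict_proba(remap(flows + delta)) > 0.5)
             == labels.astype(bool)).mean()
print(acc_before, acc_after)  # accuracy should drop after perturbation
```

The same delta is added to every flow, which is what makes the perturbation "blind": it is computed once, offline, and can then be injected into live connections packet by packet without buffering. The real attack replaces the toy classifier with the target DNN (or a surrogate, for black-box transfer) and uses remapping functions tailored to each feature type.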
