Securing Machine Learning in the Cloud: A Systematic Review of Cloud Machine Learning Security

With advances in machine learning (ML) and deep learning (DL) techniques, and the capacity of cloud computing to deliver services efficiently and cost-effectively, Machine Learning as a Service (MLaaS) cloud platforms have become popular. In addition, third-party cloud services are increasingly used to outsource the training of DL models, which requires substantial and costly computational resources (e.g., high-performance graphics processing units (GPUs)). Such widespread use of cloud-hosted ML/DL services opens a wide range of attack surfaces for adversaries to exploit ML/DL systems for malicious ends. In this article, we conduct a systematic review of the literature on cloud-hosted ML/DL models along both of the dimensions important to their security: attacks and defenses. Our systematic review identified 31 related articles, of which 19 focus on attacks, six focus on defenses, and six address both attacks and defenses. Our evaluation reveals growing interest in the research community in both attacking MLaaS platforms and defending them against such attacks. In addition, we identify the limitations and pitfalls of the analyzed articles and highlight open research issues that require further investigation.
