Fast and Efficient Decision-Based Attack for Deep Neural Network on Edge

Deep Neural Networks (DNNs) are highly effective in high-performance applications such as computer vision, natural language processing, and speech recognition. However, these networks are vulnerable to adversarial attacks that infuse perturbations into the input data which are imperceptible to the human eye. In this paper, we propose a novel decision-based targeted adversarial attack algorithm that exposes the vulnerability of the underlying DNN when implemented on a resource-constrained edge computing platform. Experimental results show that the proposed model generates a single perturbed image 4 seconds faster on average than the state-of-the-art RED-Attack [2], while consuming 15% less time over the entire dataset.
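To make the decision-based setting concrete, the sketch below illustrates a minimal boundary-attack-style targeted attack in the spirit of [16]: the attacker observes only the hard (top-1) label returned by the model, starts from an input already classified as the target label, and random-walks toward the original input while rejecting any step that loses the target label. The toy nearest-centroid classifier, the step sizes, and all function names are illustrative assumptions, not the algorithm proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hard-label classifier: assigns the label of the nearest class centroid.
# In the decision-based threat model the attacker sees ONLY this label,
# never gradients or confidence scores.
CENTROIDS = np.array([[0.0, 0.0], [4.0, 4.0]])

def classify(x):
    return int(np.argmin(np.linalg.norm(CENTROIDS - x, axis=1)))

def decision_based_targeted_attack(x_orig, x_start, target_label,
                                   steps=2000, step_size=0.05):
    """Boundary-attack-style random walk (illustrative sketch):
    start from a sample already classified as target_label, then move
    toward the original input, accepting only moves that keep the label."""
    x_adv = x_start.copy()
    for _ in range(steps):
        direction = x_orig - x_adv  # pull toward the original input
        candidate = (x_adv + step_size * direction
                     + step_size * rng.normal(size=x_adv.shape))
        if classify(candidate) == target_label:  # one hard-label query
            x_adv = candidate  # accepted: still misclassified as target
    return x_adv

x_orig = np.array([0.2, 0.1])   # classified as class 0
x_start = np.array([4.0, 4.0])  # classified as class 1 (the target)
x_adv = decision_based_targeted_attack(x_orig, x_start, target_label=1)
print(classify(x_adv), np.linalg.norm(x_adv - x_orig))
```

The adversarial point converges close to the decision boundary: it remains classified as the target label while its distance to the original input shrinks, which is exactly the trade-off that query-efficient variants such as RED-Attack [2] optimize under a limited query budget.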

[1] Wen-Zhan Song et al. PoTrojan: powerful neural-level trojan designs in deep learning models, 2018, arXiv.

[2] Muhammad Shafique et al. RED-Attack: Resource Efficient Decision based Attack for Machine Learning, 2019, arXiv.

[3] Ananthram Swami et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).

[4] Jun Zhu et al. Improving Black-box Adversarial Attacks with a Transfer-based Prior, 2019, NeurIPS.

[5] Yang Song et al. Constructing Unrestricted Adversarial Examples with Generative Models, 2018, NeurIPS.

[6] Jie Yang et al. Adversarial Attack Type I: Cheat Classifiers by Significant Changes, 2018, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[7] Jinfeng Yi et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.

[8] Fabio Roli et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.

[9] James Bailey et al. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets, 2020, ICLR.

[10] Joan Bruna et al. Intriguing properties of neural networks, 2013, ICLR.

[11] Nenghai Yu et al. Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).

[12] Ian S. Fischer et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples, 2017, arXiv.

[13] Muhammad Shafique et al. FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning, 2018, 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE).

[14] Arno Blaas et al. BayesOpt Adversarial Attack, 2020, ICLR.

[15] Seyed-Mohsen Moosavi-Dezfooli et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[16] Matthias Bethge et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.

[17] Patrick D. McDaniel et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, arXiv.

[18] Aleksander Madry et al. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors, 2018, ICLR.

[19] Muhammad Shafique et al. TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks, 2019, 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS).

[20] Hu Zhang et al. Query-efficient Meta Attack to Deep Neural Networks, 2019, ICLR.

[21] Samy Bengio et al. Adversarial examples in the physical world, 2016, ICLR.

[22] David A. Wagner et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).

[23] Qi Wei et al. Hu-Fu: Hardware and Software Collaborative Attack Framework Against Neural Networks, 2018, 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI).

[24] Johannes Stallkamp et al. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition, 2012, Neural Networks.

[25] Ajmal Mian et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.

[26] Aleksander Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.

[27] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.