Towards Multiple Black-boxes Attack via Adversarial Example Generation Network

Current research on adversarial attacks focuses on a single model, while attacking multiple models simultaneously remains challenging. In this paper, we propose a novel black-box attack method, referred to as MBbA, which can attack multiple black-box models at the same time. By encoding an input image and its target category into an associated space, each decoder seeks the appropriate attack areas in the image through the designed loss functions and then generates effective adversarial examples. This process realizes end-to-end adversarial example generation without involving substitute models for the black-box scenario. Moreover, when the adversarial examples generated by MBbA are adopted for adversarial training, the robustness of the attacked models is greatly improved. More importantly, these adversarial examples achieve satisfactory attack performance even against black-box models trained with adversarial examples generated by other black-box attack methods, which demonstrates their good transferability. Finally, extensive experiments show that, compared with other state-of-the-art methods: (1) MBbA takes the least time to achieve the most effective attacks in the multi-black-box attack scenario, and it also achieves the highest attack success rates in the single black-box attack scenario; (2) the adversarial examples generated by MBbA can effectively improve the robustness of the attacked models and exhibit good transferability.
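To make the described generation pipeline concrete, below is a minimal PyTorch sketch of an MBbA-style generator: a shared encoder fuses the input image with an embedding of its target category, and a separate decoder per attacked black-box model produces a bounded perturbation. All class names, layer sizes, the perturbation budget `eps`, and the C&W-style margin loss are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Shared encoder: fuses the input image with a target-label embedding."""
    def __init__(self, num_classes, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.embed = nn.Embedding(num_classes, feat_dim)

    def forward(self, x, target):
        h = self.conv(x)                          # (B, feat_dim, H/4, W/4)
        e = self.embed(target)[:, :, None, None]  # (B, feat_dim, 1, 1)
        return h + e                              # joint image/label code

class Decoder(nn.Module):
    """One decoder per attacked black-box model; emits a bounded perturbation."""
    def __init__(self, feat_dim=64, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, z, x):
        delta = self.eps * torch.tanh(self.deconv(z))  # L_inf-bounded noise
        return torch.clamp(x + delta, 0.0, 1.0)        # adversarial example

def targeted_margin_loss(probs, target, kappa=0.0):
    """C&W-style margin on probabilities queried from one black-box model.
    In a true black-box setting these scores carry no gradient, so this loss
    would be optimized via zeroth-order / score-based estimation (elided)."""
    t = probs.gather(1, target[:, None]).squeeze(1)
    mask = F.one_hot(target, probs.size(1)).bool()
    other = probs.masked_fill(mask, -1.0).max(dim=1).values
    return torch.clamp(other - t + kappa, min=0).mean()
```

In training, each decoder's output would be scored only by querying its corresponding black-box model, combining the margin loss with a perturbation-magnitude penalty; no substitute models are involved, consistent with the end-to-end black-box setting described above.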
