LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack

We propose LSDAT, an image-agnostic decision-based black-box attack that exploits low-rank and sparse decomposition (LSD) to dramatically reduce the number of queries and achieve superior fooling rates compared to state-of-the-art decision-based methods under given imperceptibility constraints. LSDAT crafts perturbations in the low-dimensional subspace formed by the sparse component of the input sample and that of an adversarial sample to achieve query efficiency. The specific perturbation of interest is obtained by traversing the path between the input and adversarial sparse components. We argue that the proposed sparse perturbation is the sparse perturbation most aligned with the shortest path from the input sample to the decision boundary for some initial adversarial sample (i.e., the best sparse approximation of the shortest path, and hence the one most likely to fool the model). Theoretical analyses are provided to justify the functionality of LSDAT. Unlike other dimensionality-reduction techniques aimed at improving query efficiency (e.g., those based on the FFT), LSD works directly in the image pixel domain, guaranteeing that non-ℓ2 constraints, such as sparsity, are satisfied. LSD offers better control over the number of queries and provides computational efficiency, as it performs the sparse decomposition of the input and adversarial images only once to generate all queries. We demonstrate ℓ0-, ℓ2- and ℓ∞-bounded attacks with LSDAT to show its efficiency compared to baseline decision-based attacks in diverse low-query-budget scenarios, as outlined in the experiments.
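To make the procedure concrete, here is a minimal sketch of the core idea under stated assumptions: a GoDec-style alternating SVD/thresholding decomposition stands in for the paper's LSD step, the image is a single 2D matrix in [0, 1], and `is_adversarial` is a hypothetical oracle wrapping the target model's hard-label decision. This illustrates the sparse-path traversal, not the authors' reference implementation.

```python
import numpy as np

def low_rank_sparse_decomposition(X, rank=2, sparsity=0.05, iters=10):
    """GoDec-style alternating decomposition X ~ L + S (simplified sketch).

    L is a rank-`rank` approximation; S keeps only the largest-magnitude
    residual entries. Assumes X is a single 2D image matrix.
    """
    S = np.zeros_like(X)
    for _ in range(iters):
        # Low-rank update: truncated SVD of the residual X - S.
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: hard-threshold X - L to its top-k entries.
        R = X - L
        k = max(1, int(sparsity * R.size))
        thresh = np.partition(np.abs(R).ravel(), -k)[-k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

def lsdat_attack(x, x_adv, is_adversarial, steps=50):
    """Traverse the path between the sparse components of the input x and
    an initial adversarial image x_adv, spending one decision query per step.
    `is_adversarial` is a hypothetical hard-label decision oracle.
    """
    _, S_x = low_rank_sparse_decomposition(x)
    _, S_adv = low_rank_sparse_decomposition(x_adv)
    direction = S_adv - S_x  # sparse perturbation direction (computed once)
    for alpha in np.linspace(0.0, 1.0, num=steps):
        candidate = np.clip(x + alpha * direction, 0.0, 1.0)
        if is_adversarial(candidate):  # one query to the black-box model
            return candidate  # earliest adversarial point on the sparse path
    return x_adv  # fall back to the initial adversarial sample
```

Note how this matches the abstract's efficiency claim: the decomposition is run only twice (once for the input, once for the adversarial sample), so each additional query costs only an interpolation and a clip.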
