Towards Practical Certifiable Patch Defense with Vision Transformer

Patch attacks, one of the most threatening forms of physical adversarial attack, can cause a network to misclassify by arbitrarily modifying pixels within a contiguous region. Certifiable patch defenses guarantee that a classifier's prediction cannot be changed by any such patch attack. However, existing certifiable patch defenses sacrifice clean accuracy and achieve only low certified accuracy on toy datasets. Moreover, their clean and certified accuracies remain far below those of standard classification networks, which limits their practical use. To move towards a practical certifiable patch defense, we introduce the Vision Transformer (ViT) into the framework of Derandomized Smoothing (DS). Specifically, we propose a progressive smoothed image modeling task to train the Vision Transformer, which captures more discriminative local context of an image while preserving global semantic information. For efficient inference and real-world deployment, we reconstruct the global self-attention of the original ViT into isolated band unit self-attention. On ImageNet, under 2%-area patch attacks our method achieves 41.70% certified accuracy, nearly doubling the previous best result (26.00%). At the same time, it achieves 78.58% clean accuracy, close to that of a standard ResNet-101. Extensive experiments show that our method obtains state-of-the-art clean and certified accuracy with efficient inference on CIFAR-10 and ImageNet.
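Because the method builds on Derandomized Smoothing (DS), a minimal sketch of band-based DS may help make the voting and certification step concrete. This is the generic DS baseline (Levine and Feizi, 2020) with column bands, not the paper's progressive smoothed image modeling or isolated band unit self-attention; the base classifier `model`, `band_size`, and the voting `threshold` are illustrative assumptions (the paper would plug a ViT in as `model`, and practical implementations typically encode ablated pixels with extra channels rather than zeros).

```python
# Minimal sketch of band-based Derandomized Smoothing (DS), assuming a
# PyTorch classifier `model` that maps a (1, C, H, W) image to class logits.
import torch

def band_smoothed_predict(model, image, band_size, patch_size, threshold=0.5):
    """Classify `image` (C, H, W) by voting over vertical bands, and report
    whether the prediction is certified against any square patch of side
    `patch_size` (in pixels)."""
    _, _, width = image.shape
    votes = None
    for pos in range(width):                      # one band per column position
        ablated = torch.zeros_like(image)
        cols = [(pos + i) % width for i in range(band_size)]
        ablated[:, :, cols] = image[:, :, cols]   # keep only the band, ablate the rest
        logits = model(ablated.unsqueeze(0))[0]
        probs = torch.softmax(logits, dim=-1)
        if votes is None:
            votes = torch.zeros_like(probs)
        votes += (probs >= threshold).float()     # a band votes for confident classes

    top2 = torch.topk(votes, k=2)
    n_top, n_second = top2.values.tolist()
    prediction = top2.indices[0].item()

    # A patch of side `patch_size` can intersect at most
    # (patch_size + band_size - 1) band positions, so it can remove at most
    # that many votes from the top class and add that many to the runner-up.
    max_affected = patch_size + band_size - 1
    certified = n_top - n_second > 2 * max_affected
    return prediction, certified
```

In this view, the paper's contribution is orthogonal to the certification logic: the smoothed-image training and the band-restricted self-attention aim to make each per-band prediction both more accurate and cheaper to compute, which raises the vote margin (and hence certified accuracy) without changing the DS guarantee.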
