Adversarial Color Projection: A Projector-Based Physical Attack on DNNs

Recent research has demonstrated that deep neural networks (DNNs) are vulnerable to adversarial perturbations, making it imperative to evaluate the resilience of advanced DNNs to adversarial attacks. Traditional methods that use stickers as physical perturbations to deceive classifiers struggle to achieve stealthiness and are susceptible to printing loss. More recent physical attacks use light beams such as lasers, but the optical patterns they generate are artificial rather than natural. In this work, we propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP), which manipulates the physical parameters of a color projection to perform an adversarial attack. We evaluate our approach on three key criteria: effectiveness, stealthiness, and robustness. In the digital environment, we achieve an attack success rate of 97.60% on a subset of ImageNet; in the physical environment, we attain attack success rates of 100% in the indoor test and 82.14% in the outdoor test. We compare adversarial samples generated by AdvCP with baseline samples to demonstrate the stealthiness of our approach. When attacking advanced DNNs, experimental results show that our method achieves an attack success rate above 85% in all cases, verifying the robustness of AdvCP. Finally, we discuss the potential threats AdvCP poses to future vision-based systems and applications, and suggest directions for light-based physical attacks.
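
The abstract does not specify how the projection parameters are optimized, so the following is only a minimal illustrative sketch of the attack's structure: it simulates the color projection digitally as an alpha blend of a solid color over the image, and uses plain random search as a stand-in black-box optimizer. The helper names (project_color, advcp_random_search) and the parameter ranges are hypothetical, not taken from the paper.

```python
# Minimal sketch of a black-box color-projection attack.
# Assumptions (not from the abstract): the projection is modeled as an
# alpha blend of a solid color over the image, and the black-box search
# is plain random search; the real physical setup and optimizer differ.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

def project_color(img, rgb, alpha):
    """Digitally simulate a projector casting a solid color onto the scene.
    img: HxWx3 float array in [0, 1]; rgb: length-3 color; alpha: intensity."""
    overlay = np.asarray(rgb, dtype=np.float32)[None, None, :]
    return np.clip((1.0 - alpha) * img + alpha * overlay, 0.0, 1.0)

def true_class_confidence(img, label):
    """Query the classifier: black-box access, outputs only, no gradients."""
    x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(normalize(x)), dim=1)
    return probs[0, label].item()

def advcp_random_search(img, label, queries=200,
                        rng=np.random.default_rng(0)):
    """Search projection parameters (color, intensity) that minimize the
    true-class confidence -- an untargeted black-box attack."""
    best_params, best_conf = None, true_class_confidence(img, label)
    for _ in range(queries):
        rgb = rng.uniform(0.0, 1.0, size=3)   # projected color
        alpha = rng.uniform(0.1, 0.5)         # projection intensity
        conf = true_class_confidence(project_color(img, rgb, alpha), label)
        if conf < best_conf:
            best_params, best_conf = (rgb, alpha), conf
    return best_params, best_conf

# Usage: img is a 224x224x3 float array in [0, 1], label the ImageNet
# index of the ground-truth class.
# params, conf = advcp_random_search(img, label)
```

In a physical deployment, project_color would be replaced by actually casting the candidate color with a projector and re-photographing the scene; only the model's outputs are queried at each step, which is what makes the attack black-box.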
