Adversarial Infrared Blocks: A Black-box Attack on Thermal Infrared Detectors at Multiple Angles in the Physical World

Infrared imaging systems have a vast array of potential applications in pedestrian detection and autonomous driving, so their safety performance is of great concern. However, few studies have explored the safety of infrared imaging systems in real-world settings. Previous research has used physical perturbations such as small bulbs and thermal "QR codes" to attack infrared detectors, but such methods are highly visible and lack stealthiness. Other researchers have used hot and cold blocks to deceive infrared detectors, but this approach can only execute attacks from a limited range of angles. To address these shortcomings, we propose a novel physical attack called adversarial infrared blocks (AdvIB). By optimizing the physical parameters of the adversarial infrared blocks, this method can execute a stealthy black-box attack on thermal imaging systems from various angles. We evaluate the proposed method in terms of effectiveness, stealthiness, and robustness. Our physical tests show that the proposed method achieves an attack success rate of over 80% under most distance and angle conditions, validating its effectiveness. For stealthiness, the adversarial infrared blocks are attached to the inside of clothing, making the attack inconspicuous to observers. Additionally, we test the proposed method against advanced detectors, and the experimental results demonstrate an average attack success rate of 51.2%, proving its robustness. Overall, the proposed AdvIB method offers a promising avenue for conducting stealthy, effective, and robust black-box attacks on thermal imaging systems, with potential implications for real-world safety and security applications.
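
The abstract's core step, black-box optimization of block parameters against a detector score, can be illustrated with a minimal sketch. The code below is an illustration under stated assumptions, not the authors' implementation: it assumes rectangular blocks parameterized by position, size, and intensity, uses SciPy's differential evolution as one plausible gradient-free optimizer, and the functions `paste_blocks` and `run_detector` are hypothetical stand-ins for the image compositing step and the thermal detector.

```python
# Minimal sketch of a black-box physical-parameter attack, NOT the
# authors' released code. In a real attack, run_detector would wrap a
# forward pass of a thermal detector (e.g. YOLOv3) on the perturbed frame.
import numpy as np
from scipy.optimize import differential_evolution

N_BLOCKS = 4  # assumed number of adversarial infrared blocks

def paste_blocks(image, params):
    """Paste rectangular blocks onto a grayscale thermal image.

    params is a flat vector of (x, y, w, h, intensity) per block,
    all normalized to [0, 1]. Low intensity mimics a "cold" block,
    high intensity a "hot" one.
    """
    img = image.copy()
    H, W = img.shape
    for x, y, w, h, t in params.reshape(N_BLOCKS, 5):
        x0, y0 = int(x * W), int(y * H)
        x1 = min(W, x0 + max(1, int(w * 0.2 * W)))  # cap width at 20% of frame
        y1 = min(H, y0 + max(1, int(h * 0.2 * H)))  # cap height at 20% of frame
        img[y0:y1, x0:x1] = t
    return img

def run_detector(image):
    """Hypothetical black-box detector returning pedestrian confidence in [0, 1].

    Placeholder so the sketch runs end-to-end; replace with a real
    detector's confidence for the person class.
    """
    return float(image.mean())

def attack_loss(params, image):
    # Lower detector confidence means a stronger attack, so minimize it.
    return run_detector(paste_blocks(image, params))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thermal_frame = rng.random((160, 120))   # fake 160x120 thermal image
    bounds = [(0.0, 1.0)] * (N_BLOCKS * 5)   # search box for all parameters
    result = differential_evolution(
        attack_loss, bounds, args=(thermal_frame,),
        maxiter=50, popsize=15, seed=0,
        polish=False,  # skip gradient-based polishing: queries only
    )
    print("best detector confidence:", result.fun)
```

Because the optimizer only queries detector scores and never touches gradients, the same loop applies unchanged to any detector exposed as a black box, which matches the attack setting described in the abstract.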
