MLIA: modulated LED illumination-based adversarial attack on traffic sign recognition system for autonomous vehicle

Traffic sign recognition (TSR) systems are essential to autonomous vehicles and are vulnerable to security threats from adversarial attacks. Existing adversarial attacks on TSR are invasive and suffer from poor concealment and high computational complexity, and thus have low feasibility in real-world scenarios. This paper proposes a non-invasive adversarial attack scheme based on modulated LED illumination. Through fast intensity modulation of lighting such as LED streetlights, the scheme generates luminance flicker that is imperceptible to human eyes; by exploiting the rolling-shutter mechanism of the CMOS sensors in the in-vehicle imaging system, this temporal flicker is implanted as luminance perturbations in the images acquired by the autonomous vehicle, poisoning the image data fed into the TSR system. Depending on the modulation frequency and pattern of the LED illumination, the proposed scheme enables a denial-of-service (DoS) attack that causes traffic sign detection to fail and an escape attack that causes traffic signs to be misclassified, with the advantages of superior concealment, low computational complexity, and high practical feasibility. Experiments are conducted in both the digital and physical world on two benchmark datasets (GTSDB and GTSRB) and two state-of-the-art models, YOLOv5m for TSR detection and Sill-Net for TSR classification. Experimental results show that the proposed DoS attack achieves a success rate of 90.00% against the TSR detection model (YOLOv5m), and the proposed escape attack achieves a success rate of 35.00% against the TSR classification model (Sill-Net).
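The rolling-shutter coupling is the core of the attack: a CMOS sensor exposes image rows sequentially, so an LED flickering faster than the human flicker-fusion threshold is sampled at a different phase of its intensity waveform by each row, imprinting bright and dark bands across the captured frame. The following is a minimal sketch of this effect, not the authors' implementation; the square-wave modulation, row readout time, and modulation depth are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of how a temporally
# modulated light source becomes a spatial stripe perturbation under a
# rolling-shutter camera.
import numpy as np

def rolling_shutter_perturbation(image, mod_freq_hz, row_readout_s=30e-6,
                                 depth=0.4, phase=0.0):
    """Add row-wise luminance stripes that a rolling-shutter CMOS sensor
    would record from an LED flickering at mod_freq_hz.

    image         : HxWx3 float array in [0, 1]
    mod_freq_hz   : LED intensity-modulation frequency (set above the human
                    flicker-fusion threshold so the flicker is invisible)
    row_readout_s : time to read out one sensor row (hypothetical value)
    depth         : modulation depth of the LED brightness
    phase         : modulation phase when row 0 is sampled
    """
    h = image.shape[0]
    # Each row is exposed slightly later than the previous one, so it
    # samples a different point of the LED's intensity waveform.
    t = np.arange(h) * row_readout_s
    # Assumed square-wave intensity modulation of the illumination.
    gain = 1.0 + depth * np.sign(np.sin(2 * np.pi * mod_freq_hz * t + phase))
    # Scale every pixel in a row by that row's instantaneous illumination.
    return np.clip(image * gain[:, None, None], 0.0, 1.0)

# Example: a 5 kHz flicker is invisible to the eye, yet with a 30 us row
# readout it leaves stripes roughly 7 rows wide in the captured frame.
frame = np.full((480, 640, 3), 0.5)  # stand-in for a camera frame
striped = rolling_shutter_perturbation(frame, mod_freq_hz=5000.0)
```

In this sketch, varying mod_freq_hz and phase changes the width and position of the stripes, which loosely corresponds to the paper's use of modulation frequency and pattern to steer the perturbation toward detection failure (DoS) or misclassification (escape).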

[1] Zhangjie Fu et al. Remote Attacks on Drones Vision Sensors: An Empirical Study, 2022, IEEE Transactions on Dependable and Secure Computing.

[2] Zhe Zhang et al. Efficient Federated Learning With Spike Neural Networks for Traffic Sign Recognition, 2022, IEEE Transactions on Vehicular Technology.

[3] Stanley H. Chan et al. Optical Adversarial Attack, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW).

[4] Zsolt Szalay et al. A collection of easily deployable adversarial traffic sign stickers, 2021, Autom.

[5] Yujie Li et al. Adaptive Square Attack: Fooling Autonomous Cars With Adversarial Traffic Signs, 2021, IEEE Internet of Things Journal.

[6] Yuan He et al. Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[7] Xianglong Liu et al. Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[8] Hanwang Zhang et al. Sill-Net: Feature Augmentation with Separated Illumination Representation, 2021, arXiv.

[9] Kaiqi Huang et al. Universal adversarial perturbations against object detection, 2021, Pattern Recognition.

[10] Naman K. Gupta et al. ultralytics/yolov5: v3.1 - Bug Fixes and Performance Improvements, 2020.

[11] Hao Yang et al. Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).

[12] James Bailey et al. Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[13] Atul Prakash et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[14] Mingyan Liu et al. Generating Adversarial Examples with Adversarial Networks, 2018, IJCAI.

[15] Atul Prakash et al. Robust Physical-World Attacks on Machine Learning Models, 2017, arXiv.

[16] Abhishek Das et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017 IEEE International Conference on Computer Vision (ICCV).

[17] Samy Bengio et al. Adversarial examples in the physical world, 2016, ICLR.

[18] Seyed-Mohsen Moosavi-Dezfooli et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[19] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.

[20] Joan Bruna et al. Intriguing properties of neural networks, 2013, ICLR.

[21] Johannes Stallkamp et al. Detection of traffic signs in real-world images: The German traffic sign detection benchmark, 2013, The 2013 International Joint Conference on Neural Networks (IJCNN).

[22] Johannes Stallkamp et al. The German Traffic Sign Recognition Benchmark: A multi-class classification competition, 2011, The 2011 International Joint Conference on Neural Networks.

[23] I. L. Bailey et al. Human electroretinogram responses to video displays, fluorescent lighting, and other high frequency sources, 1991, Optometry and Vision Science.