MLIA: modulated LED illumination-based adversarial attack on traffic sign recognition system for autonomous vehicle
S. Yiu, Z. L. Jiang, Junbin Fang, Yini Lin, Canjian Jiang, You Jiang, Yixuan Shen, Yu Cheng, Sicheng Long, Danjie Li, Siyuan Dai