Seeing isn't Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors
Ruigang Liang | Shengzhi Zhang | Hong Zhu | Yue Zhao | Kai Chen | Qintao Shen
[1] Duen Horng Chau, et al. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector, 2018, ECML/PKDD.
[2] Matti Pietikäinen, et al. Deep Learning for Generic Object Detection: A Survey, 2018, International Journal of Computer Vision.
[3] Ali Farhadi, et al. YOLOv3: An Incremental Improvement, 2018, ArXiv.
[4] Mingyan Liu, et al. Realistic Adversarial Examples in 3D Meshes, 2018, ArXiv.
[5] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[6] Dumitru Erhan, et al. Deep Neural Networks for Object Detection, 2013, NIPS.
[7] Ross B. Girshick, et al. Fast R-CNN, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[8] Adam Van Etten, et al. You Only Look Twice: Rapid Multi-Scale Object Detection In Satellite Imagery, 2018, ArXiv.
[9] Zoubin Ghahramani, et al. A study of the effect of JPG compression on adversarial images, 2016, ArXiv.
[10] Moustapha Cissé, et al. Countering Adversarial Images using Input Transformations, 2018, ICLR.
[11] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[13] Ali Farhadi, et al. YOLO9000: Better, Faster, Stronger, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Dawn Song, et al. Physical Adversarial Examples for Object Detectors, 2018, WOOT @ USENIX Security Symposium.
[16] Kaiming He, et al. Mask R-CNN, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[17] Xiangyu Zhang, et al. Light-Head R-CNN: In Defense of Two-Stage Object Detector, 2017, ArXiv.
[18] Trevor Darrell, et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, 2013, 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[20] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[21] Alan L. Yuille, et al. Mitigating adversarial effects through randomization, 2017, ICLR.
[22] Wei Liu, et al. SSD: Single Shot MultiBox Detector, 2015, ECCV.
[23] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[24] Jian Sun, et al. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[25] Yi Li, et al. R-FCN: Object Detection via Region-based Fully Convolutional Networks, 2016, NIPS.
[26] Dawn Xiaodong Song, et al. Adversarial Examples for Generative Models, 2017, 2018 IEEE Security and Privacy Workshops (SPW).
[27] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[28] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[29] Alan L. Yuille, et al. Adversarial Examples for Semantic Segmentation and Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[30] Martín Abadi, et al. Adversarial Patch, 2017, ArXiv.
[31] Yue Zhao, et al. CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition, 2018, USENIX Security Symposium.
[32] Xiang Zhang, et al. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, 2013, ICLR.
[33] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[34] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[35] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[36] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[37] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[38] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[39] Ali Farhadi, et al. You Only Look Once: Unified, Real-Time Object Detection, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[40] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.