Hassan Foroosh | Yang Zhang | Ankit Sharma | Shengnan Hu | Sumit Laha