Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
Derui Wang | Chaoran Li | Sheng Wen | Qing-Long Han | Surya Nepal | Xiangyu Zhang | Yang Xiang