[1] Duen Horng Chau, et al. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector, 2018, ECML/PKDD.
[2] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[3] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[4] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[5] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[6] Ali Farhadi, et al. YOLOv3: An Incremental Improvement, 2018, arXiv.
[7] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018, CVPR.
[8] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[9] Parham Aarabi, et al. Adversarial Attacks on Face Detectors Using Neural Net Based Constrained Optimization, 2018, IEEE MMSP.
[10] Xin Liu, et al. DPatch: Attacking Object Detectors with Adversarial Patches, 2018, arXiv.
[11] Alan L. Yuille, et al. Adversarial Examples for Semantic Segmentation and Object Detection, 2017, ICCV.
[12] Toon Goedemé, et al. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection, 2019, CVPR Workshops.
[13] Lujo Bauer, et al. Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition, 2018, arXiv.
[14] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.