Rama Chellappa | Alexander Levine | Soheil Feizi | Chun Pong Lau | Jiang Liu
[1] L. Davis et al. Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors. ECCV, 2020.
[2] Tom Goldstein et al. Detection as Regression: Certified Object Detection by Median Smoothing. arXiv, 2020.
[3] M. Cissé et al. Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples. 2017.
[4] T. Gittings et al. Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks. ACCV, 2020.
[5] Prateek Mittal et al. DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks. CCS, 2021.
[6] Keith Manville et al. APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection. ECCV, 2020.
[7] Xin Liu et al. DPATCH: An Adversarial Patch Attack on Object Detectors. SafeAI@AAAI, 2018.
[8] Kaiming He et al. Deep Residual Learning for Image Recognition. CVPR, 2016.
[9] Huanqian Yan et al. Object Hider: Adversarial Patch Attack Against Object Detectors. arXiv, 2020.
[10] Y. Vorobeychik et al. Defending Against Physically Realizable Attacks on Image Classification. ICLR, 2020.
[11] Jun-Cheng Chen et al. Class-Aware Robust Adversarial Training for Object Detection. CVPR, 2021.
[12] Sébastien Marcel et al. Torchvision: The Machine-Vision Package of Torch. ACM Multimedia, 2010.
[13] Kaiming He et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
[14] Mark Lee et al. On Physical Adversarial Patches for Object Detection. arXiv, 2019.
[15] Tom Goldstein et al. Certified Defenses for Adversarial Patches. ICLR, 2020.
[16] Wei Liu et al. SSD: Single Shot MultiBox Detector. ECCV, 2016.
[17] Toon Goedemé et al. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection. CVPR Workshops, 2019.
[18] Bernt Schiele et al. Adversarial Training against Location-Optimized Adversarial Patches. ECCV Workshops, 2020.
[19] Yoshua Bengio et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. arXiv, 2013.
[20] Thomas Brox et al. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI, 2015.
[21] David A. Wagner et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML, 2018.
[22] Michael McCoyd et al. Minority Reports Defense: Defending Against Adversarial Patches. ACNS Workshops, 2020.
[23] Prateek Mittal et al. PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking. USENIX Security Symposium, 2021.
[24] Pietro Perona et al. Microsoft COCO: Common Objects in Context. ECCV, 2014.
[25] Jingjing Hu et al. Towards a Physical-World Adversarial Patch for Blinding Object Detection Models. Information Sciences, 2020.
[26] Aleksander Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018.
[27] Salman Khan et al. Local Gradients Smoothing: Defense Against Localized Adversarial Attacks. WACV, 2019.
[28] Yaroslav Bulatov et al. xView: Objects in Context in Overhead Imagery. arXiv, 2018.
[29] Ali Farhadi et al. YOLO9000: Better, Faster, Stronger. CVPR, 2017.
[30] Lujo Bauer et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. CCS, 2016.
[31] Ross B. Girshick. Fast R-CNN. ICCV, 2015.
[32] Philip H. S. Torr et al. On the Robustness of Semantic Segmentation Models to Adversarial Attacks. CVPR, 2018.
[33] Jun Zhu et al. Boosting Adversarial Attacks with Momentum. CVPR, 2018.
[34] Siwei Lyu et al. Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches. BMVC, 2019.
[35] Haichao Zhang et al. Towards Adversarially Robust Object Detection. ICCV, 2019.
[36] Alexander Levine et al. (De)Randomized Smoothing for Certifiable Defense against Patch Attacks. NeurIPS, 2020.
[37] Jamie Hayes et al. On Visible Adversarial Perturbations & Digital Watermarking. CVPR Workshops, 2018.
[38] Deyun Chen et al. Attention-Guided Digital Adversarial Patches on Visual Detection. Security and Communication Networks, 2021.
[39] Yanjun Qi et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. NDSS, 2018.
[40] Kaiming He et al. Feature Pyramid Networks for Object Detection. CVPR, 2017.
[41] Naijin Liu et al. Adversarial YOLO: Defense Human Detection Patch Attacks via Detecting Adversarial Patches. arXiv, 2021.
[42] Zoubin Ghahramani et al. A Study of the Effect of JPG Compression on Adversarial Images. arXiv, 2016.
[43] Dawn Song et al. Physical Adversarial Examples for Object Detectors. WOOT @ USENIX Security Symposium, 2018.
[44] Akshayvarun Subramanya et al. Role of Spatial Context in Adversarial Robustness for Object Detection. CVPR Workshops, 2020.
[45] A. Vahab et al. Applications of Object Detection System. 2019.
[46] Duen Horng Chau et al. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector. ECML/PKDD, 2018.
[47] Atul Prakash et al. Robust Physical-World Attacks on Deep Learning Visual Classification. CVPR, 2018.
[48] Yoav Goldberg et al. LaVAN: Localized and Visible Adversarial Noise. ICML, 2018.