Kenneth T. Co | Emil C. Lupu | Luis Muñoz-González | Leslie Kanthan