Human-Producible Adversarial Examples
[1] Nicolas Papernot, et al. Tubes Among Us: Analog Attack on Automatic Speaker Identification, 2022, USENIX Security Symposium.
[2] Peter Ondruska, et al. Autonomy 2.0: Why is self-driving always 5 years away?, 2021, ArXiv.
[3] Jonathon S. Hare, et al. Differentiable Drawing and Sketching, 2021, ArXiv.
[4] Bo Liu, et al. When Machine Learning Meets Privacy, 2020, ACM Comput. Surv.
[5] Sumit Singh Chauhan, et al. A review on genetic algorithm: past, present, and future, 2020, Multimedia Tools and Applications.
[6] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[7] L. Davis, et al. Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors, 2019, ECCV.
[8] Pin-Yu Chen, et al. Adversarial T-Shirt! Evading Person Detectors in a Physical World, 2019, ECCV.
[9] Aleksandr Petiushko, et al. AdvHat: Real-World Adversarial Attack on ArcFace Face ID System, 2019, 25th International Conference on Pattern Recognition (ICPR), 2020.
[10] Toon Goedemé, et al. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[11] Jenni A. M. Sidey-Gibbons, et al. Machine learning in medicine: a practical introduction, 2019, BMC Medical Research Methodology.
[12] Stella F. Lourenco, et al. Skeletal descriptions of shape provide unique perceptual information for object recognition, 2019, Scientific Reports.
[13] Zhe Zhou, et al. A survey of practical adversarial example attacks, 2018, Cybersecurity.
[14] Bo Wang, et al. Machine Learning for Integrating Data in Biology and Medicine: Principles, Practice, and Opportunities, 2018, Inf. Fusion.
[15] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[16] Xiaofeng Wang, et al. Invisible Mask: Practical Attacks on Face Recognition with Infrared, 2018, ArXiv.
[17] Lujo Bauer, et al. A General Framework for Adversarial Examples with Objectives, 2017, ACM Trans. Priv. Secur.
[18] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[19] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP), 2017.
[20] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[21] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[22] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[23] Matthew Millard, et al. Drawing accuracy measured using polygons, 2013, Electronic Imaging.
[24] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, IEEE Conference on Computer Vision and Pattern Recognition.
[25] J. Tchalenko. Segmentation and accuracy in copying and drawing: Experts and beginners, 2009, Vision Research.
[26] Adam Finkelstein, et al. Where do people draw lines?, 2008, ACM Trans. Graph.
[27] Graham Rawlinson, et al. The Significance of Letter Position in Word Recognition, 2007, IEEE Aerospace and Electronic Systems Magazine.
[28] D. Grossi, et al. The selective inability to draw horizontal lines: a peculiar constructional disorder, 1998, Journal of Neurology, Neurosurgery, and Psychiatry.
[29] I. Biederman, et al. Scene perception: Detecting and judging objects undergoing relational violations, 1982, Cognitive Psychology.
[30] Banupriya, et al. Survey on Face Recognition, 2014.
[31] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.