Xiaofeng Wang | Xiangyu Liu | Weili Han | Di Tang | Zhe Zhou | Kehuan Zhang