Pascal Sturmfels | Ivan Evtimov | Tadayoshi Kohno
[1] A. Young, et al. Understanding face recognition, 1986, British Journal of Psychology.
[2] Alex Pentland, et al. Face recognition using eigenfaces, 1991, Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[3] Yuxiao Hu, et al. Face recognition using Laplacianfaces, 2005, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[4] Andrea Lagorio, et al. On the Use of SIFT Features for Face Authentication, 2006, 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06).
[5] Matti Pietikäinen, et al. Face Description with Local Binary Patterns: Application to Face Recognition, 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] Zbigniew W. Ras, et al. Facial Recognition, 2009, Encyclopedia of Data Warehousing and Mining.
[7] Ahmad-Reza Sadeghi, et al. Efficient Privacy-Preserving Face Recognition, 2009, ICISC.
[8] Xiaogang Wang, et al. Deep Learning Face Representation by Joint Identification-Verification, 2014, NIPS.
[9] Ming Yang, et al. DeepFace: Closing the Gap to Human-Level Performance in Face Verification, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[10] Jiwen Lu, et al. Discriminative Deep Metric Learning for Face Verification in the Wild, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[11] Shengcai Liao, et al. Learning Face Representation from Scratch, 2014, arXiv.
[12] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[13] James Philbin, et al. FaceNet: A unified embedding for face recognition and clustering, 2015, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[15] Xiaogang Wang, et al. Deeply learned face representations are sparse, selective, and robust, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Yu Qiao, et al. A Discriminative Feature Learning Approach for Deep Face Recognition, 2016, ECCV.
[17] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[18] Ira Kemelmacher-Shlizerman, et al. The MegaFace Benchmark: 1 Million Faces for Recognition at Scale, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Chunming Tang, et al. Privacy-preserving face recognition with outsourced computation, 2016, Soft Computing.
[20] Seong Joon Oh, et al. Adversarial Image Perturbation for Privacy Protection: A Game Theory Perspective, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[21] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[22] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[23] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, arXiv.
[24] Helen Nissenbaum, et al. Engineering Privacy and Protest: A Case Study of AdNauseam, 2017, IWPE@SP.
[25] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[26] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[27] Lucas Beyer, et al. In Defense of the Triplet Loss for Person Re-Identification, 2017, arXiv.
[28] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[29] Bhiksha Raj, et al. SphereFace: Deep Hypersphere Embedding for Face Recognition, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, arXiv.
[31] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[32] Jinyuan Jia, et al. AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning, 2018, USENIX Security Symposium.
[33] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[34] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[35] Omkar M. Parkhi, et al. VGGFace2: A Dataset for Recognising Faces across Pose and Age, 2017, 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018).
[36] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[37] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[38] Tal Hassner, et al. Deep Face Recognition: A Survey, 2018, 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI).
[39] Xing Ji, et al. CosFace: Large Margin Cosine Loss for Deep Face Recognition, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[40] Patrick J. Grother, et al. Ongoing Face Recognition Vendor Test (FRVT) Part 2: Identification, 2018.
[41] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[42] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[43] Tao Li, et al. AnonymousNet: Natural Face De-Identification With Measurable Privacy, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[44] Wei Liu, et al. Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Lujo Bauer, et al. A General Framework for Adversarial Examples with Objectives, 2017, ACM Transactions on Privacy and Security.
[46] N. Gong, et al. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples, 2019, CCS.
[47] Stefanos Zafeiriou, et al. ArcFace: Additive Angular Margin Loss for Deep Face Recognition, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[48] Florian Tramèr, et al. On Adaptive Attacks to Adversarial Example Defenses, 2020, NeurIPS.
[49] B. Schneier, et al. Politics of Adversarial Machine Learning, 2020, SSRN Electronic Journal.
[50] Ben Y. Zhao, et al. Fawkes: Protecting Privacy against Unauthorized Deep Learning Models, 2020, USENIX Security Symposium.
[51] Gilles Perrouin, et al. Ethical Adversaries, 2020, SIGKDD Explorations.
[52] Binghui Wang, et al. Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing, 2019, ICLR.
[53] Andreas Butz, et al. How to Trick AI: Users' Strategies for Protecting Themselves from Automatic Personality Assessment, 2020, CHI.
[54] A. Madry, et al. Data Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses, 2020.
[55] Somesh Jha, et al. Face-Off: Adversarial Face Obfuscation, 2020, Proceedings on Privacy Enhancing Technologies.
[56] Micah Goldblum, et al. LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition, 2021, ICLR.
[57] W. Feng, et al. On the (Im)Practicality of Adversarial Perturbation for Image Privacy, 2020, Proceedings on Privacy Enhancing Technologies.
[58] Aleksandr Petiushko, et al. AdvHat: Real-World Adversarial Attack on ArcFace Face ID System, 2019, 2020 25th International Conference on Pattern Recognition (ICPR).