Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks
Quentin Bouniot | Romaric Audigier | Angélique Loesch