Improving Robustness to Attacks Against Vertex Classification
[1] Tina Eliassi-Rad, et al. Evaluating Statistical Tests for Within-Network Classifiers of Relational Data, 2009, 2009 Ninth IEEE International Conference on Data Mining.
[2] Christos Faloutsos, et al. It's who you know: graph mining using recursive structural features, 2011, KDD.
[3] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[4] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[5] Jure Leskovec, et al. node2vec: Scalable Feature Learning for Networks, 2016, KDD.
[6] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[8] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[9] Jennifer Neville, et al. Deep Collective Inference, 2017, AAAI.
[10] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[11] Percy Liang, et al. Adversarial Examples for Evaluating Reading Comprehension Systems, 2017, EMNLP.
[12] Nicholas Carlini, et al. On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses, 2018, ArXiv.
[13] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[14] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[15] Le Song, et al. Adversarial Attack on Graph Structured Data, 2018, ICML.
[16] Alan L. Yuille, et al. Mitigating adversarial effects through randomization, 2017, ICLR.
[17] James A. Storer, et al. Deflecting Adversarial Attacks with Pixel Deflection, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[18] Matthias Hein, et al. Provable Robustness of ReLU networks via Maximization of Linear Regions, 2018, AISTATS.
[19] Stephan Günnemann, et al. Adversarial Attacks on Neural Networks for Graph Data, 2018, KDD.