Multi-way Encoding for Robustness
Stan Sclaroff | Donghyun Kim | Sarah Adel Bargal | Jianming Zhang
[1] Wieland Brendel, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[2] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[3] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[4] Kenneth W. Shum, et al. Deep Representation Learning with Target Coding, 2015, AAAI.
[5] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[6] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[7] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[8] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[9] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[10] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[11] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[12] Colin Raffel, et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples, 2018, ICLR.
[13] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[14] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[15] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[16] Tong Zhang, et al. NATTACK: A Strong and Universal Gaussian Black-Box Adversarial Attack, 2018.
[17] Tong Zhang, et al. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks, 2019, ICML.
[18] Nic Ford, et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise, 2019, ICML.
[19] Matthias Bethge, et al. Towards the first adversarially robust neural network model on MNIST, 2018, ICLR.
[20] Alan L. Yuille, et al. Mitigating adversarial effects through randomization, 2017, ICLR.
[21] Harini Kannan, et al. Adversarial Logit Pairing, 2018, NIPS 2018.
[22] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[23] Logan Engstrom, et al. Evaluating and Understanding the Robustness of Adversarial Logit Pairing, 2018, ArXiv.
[24] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[25] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[26] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[28] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[29] Hui Wu, et al. Protecting Intellectual Property of Deep Neural Networks with Watermarking, 2018, AsiaCCS.
[30] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[31] David A. Wagner, et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, ArXiv.
[32] Sergio Escalera, et al. Beyond One-hot Encoding: lower dimensional target embedding, 2018, Image Vis. Comput.
[33] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.