Geoffrey E. Hinton | Colin Raffel | Yao Qin | Nicholas Frosst | Garrison Cottrell
[1] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[2] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[3] Dina Katabi, et al. ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation, 2019, ICML.
[4] J. Zico Kolter, et al. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope, 2017, ICML.
[5] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[6] Geoffrey E. Hinton, et al. Dynamic Routing Between Capsules, 2017, NIPS.
[7] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[8] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[9] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[10] James Bailey, et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality, 2018, ICLR.
[11] Thomas Hofmann, et al. The Odds Are Odd: A Statistical Test for Detecting Adversarial Examples, 2019, ICML.
[12] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[13] Kibok Lee, et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, 2018, NeurIPS.
[14] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[15] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, arXiv.
[16] Geoffrey E. Hinton, et al. Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions, 2019, ICLR.
[17] Jinfeng Yi, et al. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples, 2017, AAAI.
[18] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[19] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[20] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[21] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, arXiv.
[22] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[23] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[24] Radha Poovendran, et al. Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples, 2019, arXiv.
[25] Colin Raffel, et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples, 2018, ICLR.
[26] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[27] David Berthelot, et al. Evaluation Methodology for Attacks Against Confidence Thresholding Models, 2018.
[28] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, arXiv.