Kamyar Azizzadenesheli | Anima Anandkumar | Jean Kossaifi | Zachary C. Lipton | Jeremy Bernstein | Aran Khanna | Guneet S. Dhillon
[1] Ariel Rubinstein, et al. A Course in Game Theory, 1995.
[2] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[3] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[4] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[5] Zheng Zhang, et al. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems, 2015, ArXiv.
[6] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[7] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[8] Marc G. Bellemare, et al. The Arcade Learning Environment: An Evaluation Platform for General Agents, 2012, J. Artif. Intell. Res.
[9] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[11] Patrick D. McDaniel, et al. On the Effectiveness of Defensive Distillation, 2016, ArXiv.
[12] Arslan Munir, et al. Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks, 2017, MLDM.
[13] Dan Boneh, et al. The Space of Transferable Adversarial Examples, 2017, ArXiv.
[14] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[15] Surya Ganguli, et al. Biologically inspired protection of deep networks from adversarial attacks, 2017, ArXiv.
[16] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[17] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[18] Dawn Xiaodong Song, et al. Delving into adversarial attacks on deep policies, 2017, ICLR.
[19] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[20] Pascal Frossard, et al. Analysis of classifiers' robustness to adversarial perturbations, 2015, Machine Learning.
[21] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[22] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[23] Kamyar Azizzadenesheli, et al. signSGD: compressed optimisation for non-convex problems, 2018, ICML.