[1] Cho-Jui Hsieh, et al. Convergence of Adversarial Training in Overparametrized Neural Networks, 2019, NeurIPS.
[2] Dan Boneh, et al. Adversarial Training and Robustness for Multiple Perturbations, 2019, NeurIPS.
[3] Saeed Mahloujifar, et al. The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure, 2018, AAAI.
[4] Yuan Cao, et al. Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks, 2019, NeurIPS.
[5] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, arXiv:1707.08945.
[6] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[7] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of classifiers: from adversarial to random noise, 2016, NIPS.
[8] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[9] Jaehoon Lee, et al. Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes, 2018, ICLR.
[10] Greg Yang, et al. Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation, 2019, ArXiv.
[11] Julien Mairal, et al. Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations, 2017, J. Mach. Learn. Res.
[12] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Aleksander Madry, et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability, 2018, ICLR.
[14] Alexandros G. Dimakis, et al. Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes, 2019, NeurIPS.
[15] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[16] Nic Ford, et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise, 2019, ICML.
[17] Matthias Bethge, et al. Towards the first adversarially robust neural network model on MNIST, 2018, ICLR.
[18] Suman Jana, et al. Certified Robustness to Adversarial Examples with Differential Privacy, 2019, IEEE Symposium on Security and Privacy (SP).
[19] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[20] Luca Cardelli, et al. Robustness Guarantees for Bayesian Inference with Gaussian Processes, 2019, AAAI.
[21] Pablo Mesejo, et al. Understanding Priors in Bayesian Neural Networks at the Unit Level, 2018, ICML.
[22] Lawrence Carin, et al. Second-Order Adversarial Attack and Certifiable Robustness, 2018, ArXiv.
[23] Lewis D. Griffin, et al. A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples, 2016, ArXiv.
[24] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[25] Greg Yang, et al. Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes, 2019, NeurIPS.
[26] Stéphane Mallat, et al. Group Invariant Scattering, 2011, ArXiv.
[27] Richard E. Turner, et al. Gaussian Process Behaviour in Wide Deep Neural Networks, 2018, ICLR.
[28] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[29] Preetum Nakkiran, et al. Adversarial Robustness May Be at Odds With Simplicity, 2019, ArXiv.
[30] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.
[31] Stéphane Mallat, et al. Invariant Scattering Convolution Networks, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[32] Jascha Sohl-Dickstein, et al. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks, 2018, ICML.
[33] M. Talagrand, et al. Probability in Banach Spaces: Isoperimetry and Processes, 1991.
[34] Thomas G. Dietterich. Adaptive computation and machine learning, 1998.
[35] Martín Abadi, et al. Adversarial Patch, 2017, ArXiv.
[36] Jiaoyang Huang, et al. Dynamics of Deep Neural Networks and Neural Tangent Hierarchy, 2019, ICML.
[37] Yanzhi Wang, et al. An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks, 2018, ACM Multimedia.
[38] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[39] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[40] Jaehoon Lee, et al. Deep Neural Networks as Gaussian Processes, 2017, ICLR.
[41] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[42] Yann LeCun, et al. The Loss Surfaces of Multilayer Networks, 2014, AISTATS.
[43] Ruosong Wang, et al. Enhanced Convolutional Neural Tangent Kernels, 2019, ArXiv.
[44] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[45] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[46] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[47] Adi Shamir, et al. A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance, 2019, ArXiv.
[48] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[49] Seth Lloyd, et al. Deep neural networks are biased towards simple functions, 2018, ArXiv.
[50] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[51] W. Brendel, et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models, 2017.
[52] R. Adler, et al. Random Fields and Geometry, 2007.
[53] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[54] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[55] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[56] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[57] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[58] Colin Wei, et al. Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel, 2018, NeurIPS.
[59] Surya Ganguli, et al. Deep Information Propagation, 2016, ICLR.
[60] Luiz Eduardo Soares de Oliveira, et al. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[61] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[62] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[63] Tom Goldstein, et al. Are adversarial examples inevitable?, 2018, ICLR.
[64] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, 2016, NIPS.
[65] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[66] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[67] Arthur Jacot, et al. Neural Tangent Kernel: Convergence and Generalization in Neural Networks, 2018, NeurIPS.
[68] Carl E. Rasmussen, et al. Gaussian processes for machine learning, 2005, Adaptive computation and machine learning.
[69] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[70] Stéphane Mallat, et al. Deep roto-translation scattering for object classification, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[71] Radford M. Neal. Priors for Infinite Networks, 1996.
[72] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[73] Laurence Aitchison, et al. Deep Convolutional Networks as shallow Gaussian Processes, 2018, ICLR.
[74] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[75] Surya Ganguli, et al. The Emergence of Spectral Universality in Deep Networks, 2018, AISTATS.
[76] Julien Mairal, et al. On the Inductive Bias of Neural Tangent Kernels, 2019, NeurIPS.
[77] Christopher K. I. Williams. Computing with Infinite Networks, 1996, NIPS.
[78] Ruosong Wang, et al. On Exact Computation with an Infinitely Wide Neural Net, 2019, NeurIPS.
[79] Jaehoon Lee, et al. Wide neural networks of any depth evolve as linear models under gradient descent, 2019, NeurIPS.
[80] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[81] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[82] Beomsu Kim, et al. Bridging Adversarial Robustness and Gradient Interpretability, 2019, ArXiv.
[83] Jürgen Schmidhuber, et al. Deep learning in neural networks: An overview, 2014, Neural Networks.