Training Robust Neural Networks Using Lipschitz Bounds
Patricia Pauli | Anne Koch | Julian Berberich | Paul Kohler | Frank Allgöwer