Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees
Jacob H. Seidman | Mahyar Fazlyab | Victor M. Preciado | George J. Pappas
[1] Qianxiao Li, et al. An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks, 2018, ICML.
[2] T. Basar, et al. H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, 1996, IEEE Trans. Autom. Control.
[3] Long Chen, et al. Maximum Principle Based Algorithms for Deep Learning, 2017, J. Mach. Learn. Res.
[4] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[5] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[6] Y. Nesterov, et al. First-order methods with inexact oracle: the strongly convex case, 2013.
[7] Cho-Jui Hsieh, et al. Convergence of Adversarial Training in Overparametrized Neural Networks, 2019, NeurIPS.
[8] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[9] E Weinan, et al. A mean-field optimal control formulation of deep learning, 2018, Research in the Mathematical Sciences.
[10] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[11] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[12] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[13] Stephen P. Boyd, et al. Convex Optimization, 2004, Cambridge University Press.
[14] Bin Dong, et al. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle, 2019, NeurIPS.
[15] Jaeho Lee, et al. Minimax Statistical Learning with Wasserstein Distances, 2017, NeurIPS.
[16] F. Chernousko, et al. Method of successive approximations for solution of optimal control problems, 2007.
[17] E Weinan, et al. A Proposal on Machine Learning via Dynamical Systems, 2017, Communications in Mathematics and Statistics.
[18] Saeed Ghadimi, et al. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming, 2013, SIAM J. Optim.
[19] James Bailey, et al. On the Convergence and Robustness of Adversarial Training, 2021, ICML.
[20] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, arXiv.