[1] R. Keener. Theoretical Statistics: Topics for a Core Course, 2010.
[2] Michael W. Mahoney et al. Traditional and Heavy-Tailed Self Regularization in Neural Network Models, 2019, ICML.
[3] Ji Zhu et al. Margin Maximizing Loss Functions, 2003, NIPS.
[4] H. Vincent Poor et al. An Introduction to Signal Detection and Estimation, 1994, Springer Texts in Electrical Engineering.
[5] Matthias Bethge et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[6] P. Olver. Nonlinear Systems, 2013.
[7] Aleksander Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[8] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[9] Qiang Liu et al. On the Margin Theory of Feedforward Neural Networks, 2018, ArXiv.
[10] S. Sastry. Nonlinear Systems: Analysis, Stability, and Control, 1999.
[11] Aleksander Madry et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[12] Samy Bengio et al. Adversarial examples in the physical world, 2016, ICLR.
[13] Nathan Srebro et al. The Implicit Bias of Gradient Descent on Separable Data, 2017, J. Mach. Learn. Res.
[14] Eliza Strickland. IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care, 2019, IEEE Spectrum.
[15] Pravin Varaiya et al. Stochastic Systems: Estimation, Identification, and Adaptive Control, 1986.
[16] Nathan Srebro et al. The Marginal Value of Adaptive Gradient Methods in Machine Learning, 2017, NIPS.
[17] Jorge Nocedal et al. Optimization Methods for Large-Scale Machine Learning, 2016, SIAM Rev.
[18] Samy Bengio et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[19] Sébastien Bubeck et al. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, 2012, Found. Trends Mach. Learn.
[20] Peter L. Bartlett et al. The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network, 1998, IEEE Trans. Inf. Theory.
[21] Joan Bruna et al. Intriguing properties of neural networks, 2013, ICLR.
[22] S. Shankar Sastry et al. Step Size Matters in Deep Learning, 2018, NeurIPS.
[23] Robert Tibshirani et al. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition, 2009, Springer Series in Statistics.
[24] S. Sastry et al. Adaptive Control: Stability, Convergence and Robustness, 1989.
[25] Shai Ben-David et al. Understanding Machine Learning: From Theory to Algorithms, 2014.
[26] Matus Telgarsky et al. Risk and parameter convergence of logistic regression, 2018, ArXiv.
[27] Stephen P. Boyd et al. Convex Optimization, 2004, Cambridge University Press.
[28] C. Desoer et al. Linear System Theory, 1963.