On the Powerball Method for Optimization
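For context on the paper named in the title: the Powerball method, as it is commonly presented in the optimization literature, modifies gradient descent by applying an elementwise signed power to the gradient before the update. The sketch below is a minimal illustration under that assumption; the function name, step size, power parameter, and test problem are illustrative choices, not taken from the paper itself.

```python
import numpy as np

def powerball_step(x, grad, gamma=0.5, alpha=0.1):
    """One Powerball-style update: the gradient g is replaced elementwise
    by sign(g) * |g|**gamma; gamma = 1 recovers plain gradient descent."""
    g = grad(x)
    return x - alpha * np.sign(g) * np.abs(g) ** gamma

# Illustrative use on a simple quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x = np.array([4.0, -2.0])
for _ in range(200):
    x = powerball_step(x, grad=lambda z: z)
# x now lies close to the minimizer at the origin.
```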
[1] Jorge Nocedal, Stephen J. Wright. Numerical Optimization, Springer, 2006.
[2] Alexandre M. Bayen, et al. Accelerated Mirror Descent in Continuous and Discrete Time, 2015, NIPS.
[3] S. Sastry. Nonlinear Systems: Analysis, Stability, and Control, 1999.
[4] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, 2004, Applied Optimization.
[5] Michael I. Jordan, et al. Gradient Descent Only Converges to Minimizers, 2016, COLT.
[6] Jorge Nocedal, et al. On the limited memory BFGS method for large scale optimization, 1989, Math. Program.
[7] Stephen P. Boyd, et al. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights, 2014, J. Mach. Learn. Res.
[8] Dong Yu, et al. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs, 2014, INTERSPEECH.
[9] Sergey Brin, et al. The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998, Comput. Networks.
[10] Stephen P. Boyd, et al. Convex Optimization, 2004, Cambridge University Press.
[11] Benjamin Recht, et al. Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints, 2014, SIAM J. Optim.
[12] Steven H. Strogatz. Nonlinear Dynamics and Chaos, 2024.
[13] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964, USSR Computational Mathematics and Mathematical Physics.
[14] S. Bhat, et al. Finite-time stability of homogeneous systems, 1997, Proceedings of the 1997 American Control Conference (Cat. No.97CH36041).