The Complexity of Making the Gradient Small in Stochastic Convex Optimization
Ohad Shamir | Nathan Srebro | Karthik Sridharan | Dylan J. Foster | Ayush Sekhari | Blake E. Woodworth
[1] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization, 1983.
[2] Eli Upfal, et al. Computing with Noisy Information, 1994, SIAM J. Comput.
[3] J. Slack. How to make the gradient, 1994, Nature.
[4] Yurii Nesterov. Introductory Lectures on Convex Optimization - A Basic Course, 2014, Applied Optimization.
[5] Richard M. Karp, et al. Noisy binary search and its applications, 2007, SODA '07.
[6] Martin J. Wainwright, et al. Information-theoretic lower bounds on the oracle complexity of convex optimization, 2009, NIPS.
[7] Ohad Shamir, et al. Stochastic Convex Optimization, 2009, COLT.
[8] Saeed Ghadimi, et al. Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework, 2012, SIAM J. Optim.
[9] Ohad Shamir, et al. Optimal Distributed Online Prediction Using Mini-Batches, 2010, J. Mach. Learn. Res.
[10] Saeed Ghadimi, et al. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming, 2013, SIAM J. Optim.
[11] Nathan Srebro, et al. Tight Complexity Bounds for Optimizing Composite Objectives, 2016, NIPS.
[12] Saeed Ghadimi, et al. Accelerated gradient methods for nonconvex nonlinear and stochastic programming, 2013, Mathematical Programming.
[13] Alexander J. Smola, et al. Stochastic Variance Reduction for Nonconvex Optimization, 2016, ICML.
[14] Zeyuan Allen-Zhu. Katyusha: the first direct acceleration of stochastic gradient methods, 2016, J. Mach. Learn. Res.
[15] Sebastian Pokutta, et al. Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory, 2014, IEEE Transactions on Information Theory.
[16] Michael I. Jordan, et al. Non-convex Finite-Sum Optimization Via SCSG Methods, 2017, NIPS.
[17] Michael I. Jordan, et al. How to Escape Saddle Points Efficiently, 2017, ICML.
[18] Tong Zhang, et al. SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator, 2018, NeurIPS.
[19] Nathan Srebro, et al. Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization, 2018, NeurIPS.
[20] Quanquan Gu, et al. Stochastic Nested Variance Reduced Gradient Descent for Nonconvex Optimization, 2018, NeurIPS.
[21] Damek Davis, et al. Complexity of finding near-stationary points of convex functions stochastically, 2018, arXiv:1802.08556.
[22] Zeyuan Allen-Zhu. How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD, 2018, NeurIPS.
[23] Yair Carmon, et al. Lower bounds for finding stationary points I, 2017, Mathematical Programming.
[24] Yair Carmon, et al. Lower bounds for finding stationary points II: first-order methods, 2017, Mathematical Programming.