[1] W. Hoeffding. Probability Inequalities for Sums of Bounded Random Variables, 1963.
[2] G. Pisier. Martingales with values in uniformly convex spaces, 1975.
[3] D. Ruppert, et al. Efficient Estimations from a Slowly Convergent Robbins-Monro Process, 1988.
[4] Boris Polyak, et al. Acceleration of stochastic approximation by averaging, 1992.
[5] Marc Teboulle, et al. Mirror descent and nonlinear projected subgradient methods for convex optimization, 2003, Oper. Res. Lett.
[6] Yurii Nesterov. Introductory Lectures on Convex Optimization - A Basic Course, 2004, Applied Optimization.
[7] H. Hanche-Olsen. On the uniform convexity of L^p, 2005, arXiv:math/0502021.
[8] A. Juditsky, et al. Solving variational inequalities with Stochastic Mirror-Prox algorithm, 2008, arXiv:0809.0815.
[9] Alexander Shapiro, et al. Robust Stochastic Approximation Approach to Stochastic Programming, 2009, SIAM J. Optim.
[10] Lutz Dümbgen, et al. Nemirovski's Inequalities Revisited, 2008, Am. Math. Mon.
[11] Ambuj Tewari, et al. Composite objective mirror descent, 2010, COLT.
[12] Elad Hazan, et al. An optimal algorithm for stochastic strongly-convex optimization, 2010, arXiv:1006.2425.
[13] Eric Moulines, et al. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning, 2011, NIPS.
[14] Ambuj Tewari, et al. On the Universality of Online Mirror Descent, 2011, NIPS.
[15] Guanghui Lan. An optimal method for stochastic composite optimization, 2011, Mathematical Programming.
[16] Mark W. Schmidt, et al. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets, 2012, NIPS.
[17] Tong Zhang, et al. Proximal Stochastic Dual Coordinate Ascent, 2012, arXiv.
[18] Ohad Shamir, et al. Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization, 2011, ICML.
[19] Martin J. Wainwright, et al. Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization, 2010, IEEE Transactions on Information Theory.
[20] Eric Moulines, et al. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n), 2013, NIPS.
[21] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[22] Francis Bach, et al. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, 2014, NIPS.
[23] Alexander J. Smola, et al. Efficient mini-batch training for stochastic optimization, 2014, KDD.
[24] A. Juditsky, et al. Deterministic and Stochastic Primal-Dual Subgradient Algorithms for Uniformly Convex Minimization, 2014.
[25] Lin Xiao, et al. An Accelerated Proximal Coordinate Gradient Method, 2014, NIPS.
[26] Francis R. Bach, et al. From Averaging to Acceleration, There is Only a Step-size, 2015, COLT.
[27] Léon Bottou, et al. A Lower Bound for the Optimization of Finite Sums, 2014, ICML.
[28] Yurii Nesterov, et al. Universal gradient methods for convex optimization problems, 2015, Math. Program.
[29] Zaïd Harchaoui, et al. A Universal Catalyst for First-Order Optimization, 2015, NIPS.
[30] Nathan Srebro, et al. Tight Complexity Bounds for Optimizing Composite Objectives, 2016, NIPS.
[31] Zeyuan Allen-Zhu, et al. Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives, 2015, ICML.
[32] Zeyuan Allen-Zhu, et al. Optimal Black-Box Reductions Between Optimization Objectives, 2016, NIPS.
[33] Atsushi Nitanda, et al. Accelerated Stochastic Gradient Descent for Minimizing Finite Sums, 2015, AISTATS.
[34] Alexander J. Smola, et al. Stochastic Variance Reduction for Nonconvex Optimization, 2016, ICML.
[35] Tong Zhang, et al. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization, 2013, Mathematical Programming.
[36] Zeyuan Allen-Zhu. Katyusha: the first direct acceleration of stochastic gradient methods, 2016, J. Mach. Learn. Res.
[37] Michael I. Jordan, et al. Non-convex Finite-Sum Optimization Via SCSG Methods, 2017, NIPS.
[38] Yossi Arjevani, et al. Limitations on Variance-Reduction and Acceleration Schemes for Finite Sums Optimization, 2017, NIPS.
[39] Francis R. Bach, et al. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression, 2016, J. Mach. Learn. Res.
[40] Tianbao Yang, et al. Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter, 2017, NIPS.
[41] Zeyuan Allen-Zhu. Katyusha: the first direct acceleration of stochastic gradient methods, 2017, STOC.
[42] Michael I. Jordan, et al. Less than a Single Pass: Stochastically Controlled Stochastic Gradient, 2016, AISTATS.
[43] Yi Zhou, et al. An optimal randomized incremental gradient method, 2015, Mathematical Programming.