Barzilai-Borwein Step Size for Stochastic Gradient Descent
Conghui Tan | Shiqian Ma | Yu-Hong Dai | Yuqiu Qian
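For context, here is a minimal sketch of the classical two-point Barzilai-Borwein (BB) step size (Barzilai & Borwein, 1988), the rule the paper adapts to the stochastic gradient setting. This deterministic sketch shows only the BB1 formula alpha_k = s^T s / s^T y on a least-squares problem; the test problem, function names, and the fallback step `alpha0` are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: full-gradient descent with the BB1 step size on f(x) = 0.5*||Ax - b||^2.
# The paper above adapts this idea to SGD; this sketch is the classical
# deterministic version only.
import numpy as np

def bb_gradient_descent(A, b, x0, iters=100, alpha0=1e-3):
    """Gradient descent on 0.5 * ||Ax - b||^2 with BB1 step sizes."""
    grad = lambda x: A.T @ (A @ x - b)
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - alpha0 * g_prev              # first step: fixed step size
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev         # iterate / gradient differences
        denom = s @ y
        alpha = (s @ s) / denom if denom > 0 else alpha0  # BB1 step, guarded
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Usage: recover x from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = bb_gradient_descent(A, b, np.zeros(50))
print(np.linalg.norm(x_hat - x_true))
```

The guard on `denom` is a practical safeguard: for a strongly convex quadratic, s^T y > 0 holds in exact arithmetic, but in a stochastic variant the denominator can be near zero or negative, which is one of the issues a stochastic adaptation must handle.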