[1] Atsushi Nitanda, et al. Stochastic Proximal Gradient Descent with Acceleration Techniques, 2014, NIPS.
[2] Inderjit S. Dhillon, et al. PASSCoDe: Parallel ASynchronous Stochastic dual Co-ordinate Descent, 2015, ICML.
[3] Julien Mairal, et al. Optimization with First-Order Surrogate Functions, 2013, ICML.
[4] John Langford, et al. Slow Learners are Fast, 2009, NIPS.
[5] Stephen J. Wright, et al. An Asynchronous Parallel Stochastic Coordinate Descent Algorithm, 2013, J. Mach. Learn. Res.
[6] Martin J. Wainwright, et al. Distributed Dual Averaging in Networks, 2010, NIPS.
[7] Yoram Singer, et al. Efficient Online and Batch Learning Using Forward Backward Splitting, 2009, J. Mach. Learn. Res.
[8] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[9] Mark W. Schmidt, et al. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets, 2012, NIPS.
[10] Shai Shalev-Shwartz, et al. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization, 2012, J. Mach. Learn. Res.
[11] Alexander J. Smola, et al. Parallelized Stochastic Gradient Descent, 2010, NIPS.
[12] Stephen J. Wright, et al. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, 2011, NIPS.
[13] Lin Xiao, et al. Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization, 2009, J. Mach. Learn. Res.