[1] Sham M. Kakade, et al. Competing with the Empirical Risk Minimizer in a Single Pass, 2014, COLT.
[2] Ashok Cutkosky, et al. Online Learning Without Prior Information, 2017, COLT.
[3] Ohad Shamir, et al. Communication-Efficient Distributed Optimization using an Approximate Newton-type Method, 2013, ICML.
[4] John Langford, et al. Normalized Online Learning, 2013, UAI.
[5] John Langford, et al. A Reliable Effective Terascale Linear Learning System, 2011, J. Mach. Learn. Res.
[6] Francesco Orabona, et al. Simultaneous Model Selection and Optimization through Parameter-free Stochastic Learning, 2014, NIPS.
[7] Shai Shalev-Shwartz, et al. Online Learning and Online Convex Optimization, 2012, Found. Trends Mach. Learn.
[8] Michael I. Jordan, et al. Less than a Single Pass: Stochastically Controlled Stochastic Gradient, 2016, AISTATS.
[9] Ohad Shamir, et al. Optimal Distributed Online Prediction, 2011, ICML.
[10] Tianbao Yang, et al. Distributed Stochastic Variance Reduced Gradient Methods and a Lower Bound for Communication Complexity, 2015.
[11] Yuchen Zhang, et al. Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, 2015, arXiv.
[12] Erick Cantú-Paz, et al. Personalized Click Prediction in Sponsored Search, 2010, WSDM '10.
[13] Francesco Orabona, et al. Scale-Free Online Learning, 2016, Theor. Comput. Sci.
[14] Claudio Gentile, et al. On the Generalization Ability of On-line Learning Algorithms, 2001, IEEE Transactions on Information Theory.
[15] Anastasios Kyrillidis, et al. Trading-off Variance and Complexity in Stochastic Gradient Descent, 2016, arXiv.
[16] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res.
[17] Nathan Srebro, et al. Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox, 2017, COLT.
[18] Alexander J. Smola, et al. AIDE: Fast and Communication Efficient Distributed Optimization, 2016, arXiv.
[19] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[20] Ohad Shamir, et al. Better Mini-Batch Algorithms via Accelerated Gradient Methods, 2011, NIPS.