Delay-Tolerant Online Convex Optimization: Unified Analysis and Adaptive-Gradient Algorithms