IS-ASGD: Accelerating Asynchronous SGD using Importance Sampling
[1] Dimitris S. Papailiopoulos, et al. Perturbed Iterate Analysis for Asynchronous Stochastic Optimization, 2015, SIAM J. Optim.
[2] Michael I. Jordan, et al. Estimation, Optimization, and Parallelism when Data is Sparse, 2013, NIPS.
[3] Heng Huang, et al. Asynchronous Mini-Batch Gradient Descent with Variance Reduction for Non-Convex Optimization, 2017, AAAI.
[4] Tie-Yan Liu, et al. Asynchronous Stochastic Proximal Optimization Algorithms with Variance Reduction, 2016, AAAI.
[5] Stephen J. Wright, et al. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, 2011, NIPS.
[6] Wu-Jun Li, et al. Fast Asynchronous Parallel Stochastic Gradient Descent, 2015, arXiv.
[7] Yuanyuan Liu, et al. Accelerated Variance Reduced Stochastic ADMM, 2017, AAAI.
[8] R. Vershynin, et al. A Randomized Kaczmarz Algorithm with Exponential Convergence, 2007, math/0702226.
[9] Zhouchen Lin, et al. Parallel Asynchronous Stochastic Variance Reduction for Nonconvex Optimization, 2017, AAAI.
[10] Deanna Needell, et al. Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm, 2013, Mathematical Programming.
[11] Alexander J. Smola, et al. On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants, 2015, NIPS.
[12] Tong Zhang, et al. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization, 2014, ICML.
[13] Francis Bach, et al. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, 2014, NIPS.
[14] Wu-Jun Li, et al. Fast Asynchronous Parallel Stochastic Gradient Descent: A Lock-Free Approach with Convergence Guarantee, 2016, AAAI.
[15] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[16] Peter Richtárik, et al. Importance Sampling for Minibatches, 2016, J. Mach. Learn. Res.