Leverage Score Sampling for Faster Accelerated Regression and ERM
Yin Tat Lee | Sham M. Kakade | Praneeth Netrapalli | Naman Agarwal | Aaron Sidford | Rahul Kidambi