Stop Wasting My Gradients: Practical SVRG

We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternate mini-batch selection strategies, and (iv) consider the generalization error of the method.
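The core SVRG iteration uses the full gradient at a periodic snapshot as a control variate to reduce the variance of the stochastic gradient. The following is a minimal sketch of the basic (non-batching) SVRG update for a ridge-regularized least-squares problem; the quadratic loss, step size `eta`, and inner-loop length are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def svrg(A, b, lam=0.1, eta=0.01, epochs=20, seed=0):
    """Minimize f(w) = (1/2n)*||A w - b||^2 + (lam/2)*||w||^2 with SVRG."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)

    def grad_i(w, i):
        # Gradient of the i-th loss term plus the regularizer.
        return (A[i] @ w - b[i]) * A[i] + lam * w

    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot: the control variate.
        mu = A.T @ (A @ w_snap - b) / n + lam * w_snap
        for _ in range(n):  # inner loop of length m = n
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: unbiased for the
            # full gradient, with variance that vanishes at the optimum.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= eta * g
    return w
```

The growing-batch variants analyzed in the paper replace the exact snapshot gradient `mu` with an average over a sample whose size grows across outer iterations, trading an early-iteration error in the control variate for fewer gradient evaluations.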
