Asynchronous parallel stochastic gradient descent: a numeric core for scalable distributed machine learning algorithms

The implementation of the vast majority of machine learning (ML) algorithms boils down to solving a numerical optimization problem. In this context, Stochastic Gradient Descent (SGD) methods have long proven to deliver good results, both in terms of convergence and accuracy. Recently, several parallelization approaches have been proposed to scale SGD to very large ML problems. At their core, most of these approaches follow a MapReduce scheme. This paper presents a novel parallel updating algorithm for SGD that utilizes the asynchronous single-sided communication paradigm. Compared to existing methods, Asynchronous Parallel Stochastic Gradient Descent (ASGD) provides faster convergence with linear scalability and stable accuracy.

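To make the asynchronous-update idea concrete, the following is a minimal, illustrative sketch of lock-free asynchronous parallel SGD on a shared parameter vector (in the spirit of Hogwild-style updates), written in Python with NumPy and threads. It is not the paper's ASGD implementation, which uses asynchronous single-sided communication across distributed nodes (e.g., via GPI/GASPI); all problem sizes, step sizes, and function names here are assumptions chosen for the example.

```python
# Hypothetical sketch: several workers update a shared weight vector without
# locks or synchronization, approximating the asynchronous-update idea from
# the abstract (shared memory here, not distributed one-sided communication).
import threading

import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize mean ||X w - y||^2.
n_samples, n_features = 10_000, 20
X = rng.standard_normal((n_samples, n_features))
w_true = rng.standard_normal(n_features)
y = X @ w_true + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(n_features)  # shared parameters, updated by all workers without locks
lr = 1e-3                 # illustrative step size
n_workers, steps_per_worker, batch = 4, 2_000, 32

def worker(seed: int) -> None:
    local_rng = np.random.default_rng(seed)
    for _ in range(steps_per_worker):
        idx = local_rng.integers(0, n_samples, size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        # Asynchronous, lock-free update: no coordination with other workers.
        w[:] -= lr * grad

threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final training loss:", float(np.mean((X @ w - y) ** 2)))
```

Because the updates are sparse relative to the parameter vector and the step size is small, occasional overwrites between workers typically do not prevent convergence; in the distributed setting described by the paper, the same tolerance for stale or interleaved updates is what makes single-sided, synchronization-free communication attractive.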