Machine Learning From Distributed, Streaming Data [From the Guest Editors]

The articles in this special section focus on machine learning from distributed, streaming data. The field of machine learning has undergone radical transformations during the last decade. These transformations, fueled by our ability to collect and generate tremendous volumes of training data and to leverage massive amounts of low-cost computing power, have led to an explosion of research activity in the field by academic and industrial researchers. The articles also discuss the fields that are adopting machine learning and report on applications of these techniques.
