Wei Zhang | Xiangru Lian | Ji Liu | Ce Zhang
[1] Stephen P. Boyd, et al. Gossip algorithms: design, analysis and applications, 2005, Proceedings of the IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies.
[2] Luca Schenato, et al. A distributed consensus protocol for clock synchronization in wireless sensor network, 2007, 46th IEEE Conference on Decision and Control.
[3] Sandro Zampieri, et al. Randomized consensus algorithms over large scale networks, 2007, Information Theory and Applications Workshop.
[4] Reza Olfati-Saber, et al. Consensus and Cooperation in Networked Multi-Agent Systems, 2007, Proceedings of the IEEE.
[5] Anand D. Sarwate, et al. Broadcast Gossip Algorithms for Consensus, 2009, IEEE Transactions on Signal Processing.
[6] Xin Yuan, et al. Bandwidth optimal all-reduce algorithms for clusters of workstations, 2009, J. Parallel Distributed Comput.
[7] Asuman E. Ozdaglar, et al. Distributed Subgradient Methods for Multi-Agent Optimization, 2009, IEEE Transactions on Automatic Control.
[8] Alexander Shapiro, et al. Stochastic Approximation approach to Stochastic Programming, 2013.
[9] Angelia Nedic, et al. Distributed subgradient projection algorithm for convex optimization, 2009, IEEE International Conference on Acoustics, Speech and Signal Processing.
[10] Ruggero Carli, et al. Gossip consensus algorithms via quantized communication, 2009, Autom.
[11] A. Nedić, et al. Asynchronous Gossip Algorithm for Stochastic Optimization: Constant Stepsize Analysis, 2010.
[12] Choon Yik Tang, et al. A gossip algorithm for convex consensus optimization over networks, 2010, Proceedings of the 2010 American Control Conference.
[13] Angelia Nedic, et al. Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization, 2008, J. Optim. Theory Appl.
[14] Stephen J. Wright, et al. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, 2011, NIPS.
[15] Eric Moulines, et al. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning, 2011, NIPS.
[16] Angelia Nedic, et al. Distributed Asynchronous Constrained Stochastic Optimization, 2011, IEEE Journal of Selected Topics in Signal Processing.
[17] John C. Duchi, et al. Distributed delayed stochastic optimization, 2011, IEEE 51st Conference on Decision and Control (CDC).
[18] Ohad Shamir, et al. Optimal Distributed Online Prediction Using Mini-Batches, 2010, J. Mach. Learn. Res.
[19] Stephen J. Wright, et al. An Approximate, Efficient LP Solver for LP Rounding, 2013, NIPS.
[20] Pascal Bianchi, et al. Performance of a Distributed Stochastic Approximation Algorithm, 2012, IEEE Transactions on Information Theory.
[21] Feng Yan, et al. Distributed Autonomous Online Learning: Regrets and Intrinsic Privacy-Preserving Properties, 2010, IEEE Transactions on Knowledge and Data Engineering.
[22] Saeed Ghadimi, et al. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming, 2013, SIAM J. Optim.
[23] Haim Avron, et al. Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization, 2014, IPDPS.
[24] James T. Kwok, et al. Asynchronous Distributed ADMM for Consensus Optimization, 2014, ICML.
[25] Thomas Paine, et al. GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training, 2013, ICLR.
[26] Hamid Reza Feyzmahdavian, et al. An asynchronous mini-batch algorithm for regularized stochastic optimization, 2015, 54th IEEE Conference on Decision and Control (CDC).
[27] Stephen J. Wright, et al. An asynchronous parallel stochastic coordinate descent algorithm, 2013, J. Mach. Learn. Res.
[28] Qing Ling, et al. A Proximal Gradient Algorithm for Decentralized Composite Optimization, 2015, IEEE Transactions on Signal Processing.
[29] Lin Xiao, et al. Scaling Up Stochastic Dual Coordinate Ascent, 2015, KDD.
[30] Zheng Zhang, et al. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems, 2015, arXiv.
[31] Yijun Huang, et al. Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization, 2015, NIPS.
[32] Yann LeCun, et al. Deep learning with Elastic Averaging SGD, 2014, NIPS.
[33] Qing Ling, et al. EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization, 2014, arXiv:1404.6264.
[34] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[35] Cho-Jui Hsieh, et al. A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order, 2016, NIPS.
[36] Amit Agarwal, et al. CNTK: Microsoft's Open-Source Deep-Learning Toolkit, 2016, KDD.
[37] Michael G. Rabbat, et al. Efficient Distributed Online Prediction and Stochastic Optimization With Approximate Distributed Averaging, 2014, IEEE Transactions on Signal and Information Processing over Networks.
[38] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Suyog Gupta, et al. Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study, 2015, 2016 IEEE 16th International Conference on Data Mining (ICDM).
[40] Aryan Mokhtari, et al. DSA: Decentralized Double Stochastic Averaging Gradient Algorithm, 2015, J. Mach. Learn. Res.
[41] Saeed Ghadimi, et al. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization, 2013, Mathematical Programming.
[42] Qing Ling, et al. On the Convergence of Decentralized Gradient Descent, 2013, SIAM J. Optim.
[43] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, arXiv.
[44] Xiaojing Ye, et al. Consensus optimization with delayed and stochastic gradients on decentralized networks, 2016, IEEE International Conference on Big Data (Big Data).
[45] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, arXiv.
[46] S. Gupta, et al. Wildfire: Approximate synchronization of parameters in distributed deep learning, 2017, IBM J. Res. Dev.
[47] Wei Zhang, et al. Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent, 2017, NIPS.
[48] Yandong Wang, et al. GaDei: On Scale-Up Training as a Service for Deep Learning, 2016, 2017 IEEE International Conference on Data Mining (ICDM).
[49] Dimitris S. Papailiopoulos, et al. Perturbed Iterate Analysis for Asynchronous Stochastic Optimization, 2015, SIAM J. Optim.
[50] N. S. Aybat, et al. Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization, 2015, IEEE Transactions on Automatic Control.
[51] Xiangru Lian, et al. D2: Decentralized Training over Decentralized Data, 2018, ICML.
[52] Ali H. Sayed, et al. Decentralized Consensus Optimization With Asynchrony and Delays, 2016, IEEE Transactions on Signal and Information Processing over Networks.
[53] Qing Ling, et al. Decentralized RLS With Data-Adaptive Censoring for Regressions Over Large-Scale Networks, 2016, IEEE Transactions on Signal Processing.
[54] Wei Shi, et al. A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates, 2017, IEEE Transactions on Signal Processing.
[55] Yi Zhou, et al. Communication-efficient algorithms for decentralized and stochastic optimization, 2017, Mathematical Programming.