Pramod K. Varshney | Prashant Khanduri | Pranay Sharma | Saikiran Bulusu | Swatantra Kafle
[1] Ohad Shamir, et al. Optimal Distributed Online Prediction Using Mini-Batches, 2010, J. Mach. Learn. Res.
[2] Lili Su, et al. Securing Distributed Machine Learning in High Dimensions, 2018, ArXiv.
[3] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[4] Indranil Gupta, et al. Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance, 2018, ICML.
[5] Alexander J. Smola, et al. Parallelized Stochastic Gradient Descent, 2010, NIPS.
[6] Michael I. Jordan, et al. Non-convex Finite-Sum Optimization Via SCSG Methods, 2017, NIPS.
[7] Kannan Ramchandran, et al. Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning, 2018, ICML.
[8] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[9] Michael I. Jordan, et al. Less than a Single Pass: Stochastically Controlled Stochastic Gradient, 2016, AISTATS.
[10] Lili Su, et al. Securing Distributed Gradient Descent in High Dimensional Statistical Learning, 2018, Proc. ACM Meas. Anal. Comput. Syst.
[11] Stephen J. Wright, et al. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, 2011, NIPS.
[12] Peng Jiang, et al. A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication, 2018, NeurIPS.
[13] Rong Jin, et al. On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization, 2019, ICML.
[14] Qing Ling, et al. RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, 2018, AAAI.
[15] Alexander J. Smola, et al. Stochastic Variance Reduction for Nonconvex Optimization, 2016, ICML.
[16] Dan Alistarh, et al. Byzantine Stochastic Gradient Descent, 2018, NeurIPS.
[17] Indranil Gupta, et al. Phocas: dimensional Byzantine-resilient stochastic gradient descent, 2018, ArXiv.
[18] Seunghak Lee, et al. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server, 2013, NIPS.