Xin Liu | Jiahui Hou | Nader Bouacida | Hui Zang