Optimize Scheduling of Federated Learning on Battery-powered Mobile Devices
Cong Wang | Pengzhan Zhou | Xin Wei
[1] Zongpeng Li, et al. Online Job Scheduling in Distributed Machine Learning Clusters, 2018, IEEE INFOCOM 2018 - IEEE Conference on Computer Communications.
[2] Eranda Çela, et al. Assignment Problems, 1964, Comput. J.
[3] Chuan Wu, et al. Optimus: an efficient dynamic resource scheduler for deep learning clusters, 2018, EuroSys.
[4] Nenghai Yu, et al. Asynchronous Stochastic Gradient Descent with Delay Compensation, 2016, ICML.
[5] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[6] Mehdi Bennis, et al. Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data, 2018, ArXiv.
[7] Sarvar Patel, et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017, IACR Cryptol. ePrint Arch.
[8] Dimitris S. Papailiopoulos, et al. Gradient Diversity: a Key Ingredient for Scalable Distributed Learning, 2017, AISTATS.
[9] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[10] Wei Zhang, et al. Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent, 2017, NIPS.
[11] Marc'Aurelio Ranzato, et al. Large Scale Distributed Deep Networks, 2012, NIPS.
[12] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[13] Jirí Sgall, et al. Approximation Schemes for Scheduling on Uniformly Related and Identical Parallel Machines, 1999, ESA.
[14] Wei Wang, et al. CMFL: Mitigating Communication Overhead for Federated Learning, 2019, 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS).
[15] Seunghak Lee, et al. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server, 2013, NIPS.
[16] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[17] Yann LeCun, et al. Gradient-Based Learning Applied to Document Recognition, 1998, Proceedings of the IEEE.
[18] Ameet Talwalkar, et al. Federated Multi-Task Learning, 2017, NIPS.
[19] Yaochu Jin, et al. Multi-Objective Evolutionary Federated Learning, 2018, IEEE Transactions on Neural Networks and Learning Systems.
[20] Hubert Eichner, et al. Towards Federated Learning at Scale: System Design, 2019, SysML.
[21] Li Li, et al. Close the Gap between Deep Learning and Mobile Intelligence by Incorporating Training in the Loop, 2019, ACM Multimedia.
[22] Peter Richtárik, et al. Federated Learning: Strategies for Improving Communication Efficiency, 2016, ArXiv.
[23] Marc-Antoine Weisser, et al. Bin packing with fragmentable items: Presentation and approximations, 2015, Theor. Comput. Sci.
[24] A. W. Neebe, et al. Bottleneck generalized assignment problems, 1988.