Distributed Model Training Based on Data Parallelism in Edge Computing-Enabled Elastic Optical Networks
Yongli Zhao | Jie Zhang | Boyuan Yan | Yajie Li | Jun Li | Zebin Zeng