Andrew Chi-Chih Yao | Jing Xu | Sen Wang | Liwei Wang
[1] Venkatesh Saligrama,et al. Federated Learning Based on Dynamic Regularization , 2021, ICLR.
[2] Martin Jaggi,et al. Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning , 2020, ArXiv.
[3] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] O. Koyejo,et al. Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates , 2019, ArXiv.
[5] Lawrence Carin,et al. Faster On-Device Training Using New Federated Momentum Algorithm , 2020, ArXiv.
[6] Yue Zhao,et al. Federated Learning with Non-IID Data , 2018, ArXiv.
[7] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009.
[8] Tzu-Ming Harry Hsu,et al. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification , 2019, ArXiv.
[9] Anit Kumar Sahu,et al. FedDANE: A Federated Newton-Type Method , 2019, 2019 53rd Asilomar Conference on Signals, Systems, and Computers.
[10] Yasaman Khazaeni,et al. Bayesian Nonparametric Federated Learning of Neural Networks , 2019, ICML.
[11] Nguyen H. Tran,et al. Personalized Federated Learning with Moreau Envelopes , 2020, NeurIPS.
[12] Anit Kumar Sahu,et al. Federated Learning: Challenges, Methods, and Future Directions , 2019, IEEE Signal Processing Magazine.
[13] Marc'Aurelio Ranzato,et al. Large Scale Distributed Deep Networks , 2012, NIPS.
[14] Tengyu Ma,et al. Federated Accelerated Stochastic Gradient Descent , 2020, NeurIPS.
[15] Alexander J. Smola,et al. Parallelized Stochastic Gradient Descent , 2010, NIPS.
[16] Sebastian U. Stich,et al. Local SGD Converges Fast and Communicates Little , 2018, ICLR.
[17] Richard Nock,et al. Advances and Open Problems in Federated Learning , 2021, Found. Trends Mach. Learn..
[18] Qinghua Liu,et al. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization , 2020, NeurIPS.
[19] Blaise Agüera y Arcas,et al. Communication-Efficient Learning of Deep Networks from Decentralized Data , 2016, AISTATS.
[20] Jinbo Bi,et al. Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data , 2020, ArXiv.
[21] Kaiming He,et al. Group Normalization , 2018, ECCV.
[22] Jianyu Wang,et al. SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum , 2020, ICLR.
[23] Phillip B. Gibbons,et al. The Non-IID Data Quagmire of Decentralized Machine Learning , 2019, ICML.
[24] Stephen P. Boyd,et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers , 2011, Found. Trends Mach. Learn..
[25] Li Chen,et al. Accelerating Federated Learning via Momentum Gradient Descent , 2019, IEEE Transactions on Parallel and Distributed Systems.
[26] Sébastien Bubeck,et al. Convex Optimization: Algorithms and Complexity , 2014, Found. Trends Mach. Learn..
[27] Sreeram Kannan,et al. Improving Federated Learning Personalization via Model Agnostic Meta Learning , 2019, ArXiv.
[28] Martin J. Wainwright,et al. FedSplit: An algorithmic framework for fast federated optimization , 2020, NeurIPS.
[29] Manzil Zaheer,et al. Adaptive Federated Optimization , 2020, ICLR.
[30] Martin Jaggi,et al. Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data , 2021, ICML.
[31] Enhong Chen,et al. Variance Reduced Local SGD with Lower Communication Complexity , 2019, ArXiv.
[32] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[33] Ohad Shamir,et al. Communication-Efficient Distributed Optimization using an Approximate Newton-type Method , 2013, ICML.
[34] Xiang Li,et al. On the Convergence of FedAvg on Non-IID Data , 2019, ICLR.
[35] Sashank J. Reddi,et al. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning , 2019, ICML.
[36] Wotao Yin,et al. FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data , 2020, ArXiv.
[37] Tong Zhang,et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction , 2013, NIPS.
[38] Peter Richtárik,et al. Federated Learning: Strategies for Improving Communication Efficiency , 2016, ArXiv.
[39] Filip Hanzely,et al. Lower Bounds and Optimal Algorithms for Personalized Federated Learning , 2020, NeurIPS.
[40] Anit Kumar Sahu,et al. Federated Optimization in Heterogeneous Networks , 2018, MLSys.