TailorFL: Dual-Personalized Federated Learning under System and Data Heterogeneity
Yang Liu, Yongheng Deng, F. Lyu, Yaoxue Zhang, Ju Ren, Yunxin Liu, Weining Chen
[1] Hongli Xu,et al. Adaptive Asynchronous Federated Learning in Resource-Constrained Edge Computing , 2021, IEEE Transactions on Mobile Computing.
[2] Huaqing Wu,et al. AUCTION: Automated and Quality-Aware Client Selection Framework for Efficient Federated Learning , 2022, IEEE Transactions on Parallel and Distributed Systems.
[3] Leandros Tassiulas,et al. Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling , 2021, IEEE INFOCOM 2022 - IEEE Conference on Computer Communications.
[4] Carlee Joe-Wong,et al. FedSoft: Soft Clustered Federated Learning with Proximal Local Updating , 2021, AAAI.
[5] K. Ramchandran,et al. An Efficient Framework for Clustered Federated Learning , 2020, IEEE Transactions on Information Theory.
[6] Leandros Tassiulas,et al. Model Pruning Enables Efficient Federated Learning on Edge Devices , 2019, IEEE Transactions on Neural Networks and Learning Systems.
[7] Jer Shyuan Ng,et al. Federated Learning Over Wireless Edge Networks , 2022, Wireless Networks.
[8] Jinjun Xiong,et al. Helios: Heterogeneity-Aware Federated Learning with Dynamically Balanced Collaboration , 2021, 2021 58th ACM/IEEE Design Automation Conference (DAC).
[9] Hai Li,et al. FedMask: Joint Computation and Communication-Efficient Personalized Federated Learning via Heterogeneous Masking , 2021, SenSys.
[10] G. Xing,et al. FedDL: Federated Learning via Dynamic Layer Sharing for Human Activity Recognition , 2021, SenSys.
[11] H. Li,et al. Hermes: an efficient federated learning framework for heterogeneous mobile clients , 2021, MobiCom.
[12] Yae Jee Cho,et al. Personalized Federated Learning for Heterogeneous Clients with Clustered Knowledge Transfer , 2021, ArXiv.
[13] Weisheng Zhao,et al. FedSkel: Efficient Federated Learning on Heterogeneous Systems with Skeleton Gradients Update , 2021, CIKM.
[14] Carole-Jean Wu,et al. AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning , 2021, MICRO.
[15] Guoliang Xing,et al. ClusterFL: a similarity-aware federated learning system for human activity recognition , 2021, MobiSys.
[16] Yunxin Liu,et al. nn-Meter: towards accurate latency prediction of deep-learning model inference on diverse edge devices , 2021, MobiSys.
[17] Jiayu Zhou,et al. Data-Free Knowledge Distillation for Heterogeneous Federated Learning , 2021, ICML.
[18] Saeed Vahidian,et al. Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity , 2021, 2021 IEEE 41st International Conference on Distributed Computing Systems Workshops (ICDCSW).
[19] Marco Canini,et al. Towards Mitigating Device Heterogeneity in Federated Learning via Adaptive Model Quantization , 2021, EuroMLSys@EuroSys.
[20] Jang-Won Lee,et al. Adaptive Transmission Scheduling in Wireless Networks for Asynchronous Federated Learning , 2021, IEEE Journal on Selected Areas in Communications.
[21] Nicholas D. Lane,et al. FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout , 2021, NeurIPS.
[22] Virginia Smith,et al. Ditto: Fair and Robust Federated Learning Through Personalization , 2020, ICML.
[23] Nageen Himayat,et al. Coded Computing for Low-Latency Federated Learning Over Wireless Edge Networks , 2020, IEEE Journal on Selected Areas in Communications.
[24] M. Chowdhury,et al. Oort: Efficient Federated Learning via Guided Participant Selection , 2020, OSDI.
[25] Jie Ding,et al. HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients , 2020, ICLR.
[26] Yuanyuan Yang,et al. Towards Efficient Scheduling of Federated Mobile Devices Under Computational and Statistical Heterogeneity , 2020, IEEE Transactions on Parallel and Distributed Systems.
[27] Richard Nock,et al. Advances and Open Problems in Federated Learning , 2019, Found. Trends Mach. Learn..
[28] Stephen A. Jarvis,et al. SAFA: A Semi-Asynchronous Protocol for Fast Federated Learning With Low Overhead , 2019, IEEE Transactions on Computers.
[29] Guanding Yu,et al. Accelerating DNN Training in Wireless Federated Edge Learning Systems , 2019, IEEE Journal on Selected Areas in Communications.
[30] Hongli Xu,et al. FedSA: A Semi-Asynchronous Federated Learning Mechanism in Heterogeneous Edge Computing , 2021, IEEE Journal on Selected Areas in Communications.
[31] Carole-Jean Wu,et al. AutoScale: Energy Efficiency Optimization for Stochastic Edge Inference Using Reinforcement Learning , 2020, 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO).
[32] Jingwei Sun,et al. LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets , 2020, ArXiv.
[33] Vladimir Braverman,et al. FetchSGD: Communication-Efficient Federated Learning with Sketching , 2020, ICML.
[34] Qinghua Liu,et al. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization , 2020, NeurIPS.
[35] Hao Wang,et al. Optimizing Federated Learning on Non-IID Data with Reinforcement Learning , 2020, IEEE INFOCOM 2020 - IEEE Conference on Computer Communications.
[36] Sebastian U. Stich,et al. Ensemble Distillation for Robust Model Fusion in Federated Learning , 2020, NeurIPS.
[37] Nguyen H. Tran,et al. Personalized Federated Learning with Moreau Envelopes , 2020, NeurIPS.
[38] Yuan Xie,et al. Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey , 2020, Proceedings of the IEEE.
[39] Milind Kulkarni,et al. Survey of Personalization Techniques for Federated Learning , 2020, 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4).
[40] Peter Richtárik,et al. Federated Learning of a Mixture of Global and Local Models , 2020, ArXiv.
[41] Ruslan Salakhutdinov,et al. Think Locally, Act Globally: Federated Learning with Local and Global Representations , 2020, ArXiv.
[42] Yanzhi Wang,et al. PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning , 2020, ASPLOS.
[43] Sashank J. Reddi,et al. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning , 2019, ICML.
[44] Hang Su,et al. Pruning from Scratch , 2019, AAAI.
[45] Wei Yang Bryan Lim,et al. Federated Learning in Mobile Edge Networks: A Comprehensive Survey , 2019, IEEE Communications Surveys & Tutorials.
[46] Anit Kumar Sahu,et al. Federated Optimization in Heterogeneous Networks , 2018, MLSys.
[47] Aryan Mokhtari,et al. Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach , 2020, NeurIPS.
[48] Ilai Bistritz,et al. Distributed Distillation for On-Device Learning , 2020, NeurIPS.
[49] Jinjun Xiong,et al. ELFISH: Resource-Aware Federated Learning on Heterogeneous Edge Devices , 2019, ArXiv.
[50] Sunav Choudhary,et al. Federated Learning with Personalization Layers , 2019, ArXiv.
[51] Junpu Wang,et al. FedMD: Heterogenous Federated Learning via Model Distillation , 2019, ArXiv.
[52] Jakub Konecný,et al. Improving Federated Learning Personalization via Model Agnostic Meta Learning , 2019, ArXiv.
[53] Tzu-Ming Harry Hsu,et al. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification , 2019, ArXiv.
[54] Pavlo Molchanov,et al. Importance Estimation for Neural Network Pruning , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[55] Xu Chen,et al. Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing , 2019, Proceedings of the IEEE.
[56] Indranil Gupta,et al. Asynchronous Federated Optimization , 2019, ArXiv.
[57] Mattan Erez,et al. PruneTrain: fast neural network training by dynamic sparse model reconfiguration , 2019, SC.
[58] Jiayu Li,et al. ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Methods of Multipliers , 2018, ASPLOS.
[59] Xuanzhe Liu,et al. A First Look at Deep Learning Apps on Smartphones , 2018, WWW.
[60] Jae-Joon Han,et al. Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[61] Takayuki Nishio,et al. Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge , 2018, ICC 2019 - 2019 IEEE International Conference on Communications (ICC).
[62] Kin K. Leung,et al. Adaptive Federated Learning in Resource Constrained Edge Computing Systems , 2018, IEEE Journal on Selected Areas in Communications.
[63] Michael Carbin,et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks , 2018, ICLR.
[64] Xiao Zeng,et al. NestDNN: Resource-Aware Multi-Tenant On-Device Deep Learning for Continuous Mobile Vision , 2018, MobiCom.
[65] Amos J. Storkey,et al. CINIC-10 Is Not ImageNet or CIFAR-10 , 2018, ArXiv.
[66] Yue Zhao,et al. Federated Learning with Non-IID Data , 2018, ArXiv.
[67] Dan Alistarh,et al. Model compression via distillation and quantization , 2018, ICLR.
[68] Mark Sandler,et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[69] Bo Chen,et al. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[70] Ji Liu,et al. Gradient Sparsification for Communication-Efficient Distributed Optimization , 2017, NeurIPS.
[71] Zhiqiang Shen,et al. Learning Efficient Convolutional Networks through Network Slimming , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[72] Ameet Talwalkar,et al. Federated Multi-Task Learning , 2017, NIPS.
[73] Dan Alistarh,et al. QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding , 2016, NIPS.
[74] Hanan Samet,et al. Pruning Filters for Efficient ConvNets , 2016, ICLR.
[75] Blaise Agüera y Arcas,et al. Communication-Efficient Learning of Deep Networks from Decentralized Data , 2016, AISTATS.
[76] Peter Richtárik,et al. Federated Learning: Strategies for Improving Communication Efficiency , 2016, ArXiv.
[77] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[78] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[79] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[80] Johannes Stallkamp,et al. Detection of traffic signs in real-world images: The German traffic sign detection benchmark , 2013, The 2013 International Joint Conference on Neural Networks (IJCNN).
[81] Davide Anguita,et al. A Public Domain Dataset for Human Activity Recognition using Smartphones , 2013, ESANN.
[82] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .