Ditto: Fair and Robust Federated Learning Through Personalization
[1] Leslie Lamport, et al. The Byzantine Generals Problem, 1982, TOPL.
[2] Yu Hen Hu, et al. Vehicle classification in distributed sensor networks, 2004, J. Parallel Distributed Comput.
[3] Massimiliano Pontil, et al. Regularized multi-task learning, 2004, KDD.
[4] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.
[5] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[6] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[7] Ameet Talwalkar, et al. Federated Multi-Task Learning, 2017, NIPS.
[8] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[9] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[10] Gregory Cohen, et al. EMNIST: an extension of MNIST to handwritten letters, 2017, IJCNN.
[11] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[12] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[13] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[14] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[15] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[16] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[17] Percy Liang, et al. Fairness Without Demographics in Repeated Loss Minimization, 2018, ICML.
[18] Zhenguo Li, et al. Federated Meta-Learning with Fast Convergence and Efficient Communication, 2018, ArXiv (abs/1802.07876).
[19] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[20] Yue Zhao, et al. Federated Learning with Non-IID Data, 2018, ArXiv.
[21] Sebastian Caldas, et al. LEAF: A Benchmark for Federated Settings, 2018, ArXiv.
[22] Yee Whye Teh, et al. Progress & Compress: A scalable framework for continual learning, 2018, ICML.
[23] Behrouz Touri, et al. Global Games With Noisy Information Sharing, 2015, IEEE Transactions on Signal and Information Processing over Networks.
[24] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[25] Mehryar Mohri, et al. Agnostic Federated Learning, 2019, ICML.
[26] Hubert Eichner, et al. Federated Evaluation of On-device Personalization, 2019, ArXiv.
[27] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.
[28] Maria-Florina Balcan, et al. Adaptive Gradient-Based Meta-Learning Methods, 2019, NeurIPS.
[29] Qing Ling, et al. RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, 2018, AAAI.
[30] Sreeram Kannan, et al. Improving Federated Learning Personalization via Model Agnostic Meta Learning, 2019, ArXiv.
[31] Ananda Theertha Suresh, et al. Can You Really Backdoor Federated Learning?, 2019, ArXiv.
[32] Kannan Ramchandran, et al. An Efficient Framework for Clustered Federated Learning, 2020, IEEE Transactions on Information Theory.
[33] Jinyuan Jia, et al. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, 2019, USENIX Security Symposium.
[34] Kartik Sreenivasan, et al. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning, 2020, NeurIPS.
[35] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[36] Mehrdad Mahdavi, et al. Distributionally Robust Federated Averaging, 2021, NeurIPS.
[37] Vitaly Shmatikov, et al. Salvaging Federated Learning by Local Adaptation, 2020, ArXiv.
[38] Samet Oymak, et al. Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks, 2019, AISTATS.
[39] Filip Hanzely, et al. Lower Bounds and Optimal Algorithms for Personalized Federated Learning, 2020, NeurIPS.
[40] Yishay Mansour, et al. Three Approaches for Personalization with Applications to Federated Learning, 2020, ArXiv.
[41] Mehrdad Mahdavi, et al. Adaptive Personalized Federated Learning, 2020, ArXiv.
[42] Bo Li, et al. DBA: Distributed Backdoor Attacks against Federated Learning, 2020, ICLR.
[43] Lingjuan Lyu, et al. Towards Building a Robust and Fair Federated Learning System, 2020, ArXiv.
[44] Ehsan Kazemi, et al. On Adversarial Bias and the Robustness of Fair Machine Learning, 2020, ArXiv.
[45] Lawrence Carin, et al. WAFFLe: Weight Anonymized Factorization for Federated Learning, 2020, IEEE Access.
[46] Peter Richtárik, et al. Federated Learning of a Mixture of Global and Local Models, 2020, ArXiv.
[47] Ben London. PAC Identifiability in Federated Personalization, 2020.
[48] Martin Jaggi, et al. Byzantine-Robust Learning on Heterogeneous Datasets via Resampling, 2020, ArXiv.
[49] Nguyen H. Tran, et al. Personalized Federated Learning with Moreau Envelopes, 2020, NeurIPS.
[50] Aryan Mokhtari, et al. Personalized Federated Learning: A Meta-Learning Approach, 2020, ArXiv.
[51] Barry Smyth, et al. FedFast: Going Beyond Average for Faster Training of Federated Recommender Systems, 2020, KDD.
[52] Jonas Geiping, et al. MetaPoison: Practical General-purpose Clean-label Data Poisoning, 2020, NeurIPS.
[53] Tian Li, et al. Fair Resource Allocation in Federated Learning, 2019, ICLR.
[54] Sashank J. Reddi, et al. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning, 2019, ICML.
[55] Jianfeng Gao, et al. SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization, 2019, ACL.
[56] Yaoliang Yu, et al. FedMGDA+: Federated Learning meets Multi-objective Optimization, 2020, ArXiv.
[57] Xiang Li, et al. On the Convergence of FedAvg on Non-IID Data, 2019, ICLR.
[58] Ruslan Salakhutdinov, et al. Think Locally, Act Globally: Federated Learning with Local and Global Representations, 2020, ArXiv.
[59] Anit Kumar Sahu, et al. Federated Optimization in Heterogeneous Networks, 2018, MLSys.
[60] Walter J. Scheirer, et al. Backdooring Convolutional Neural Networks via Targeted Weight Perturbations, 2018, 2020 IEEE International Joint Conference on Biometrics (IJCB).
[61] Ameet S. Talwalkar, et al. Differentially Private Meta-Learning, 2019, ICLR.
[62] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2019, Found. Trends Mach. Learn.
[63] Wojciech Samek, et al. Clustered Federated Learning: Model-Agnostic Distributed Multitask Optimization Under Privacy Constraints, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[64] Zachary Garrett, et al. Federated Reconstruction: Partially Local Federated Learning, 2021, NeurIPS.
[65] Virginia Smith, et al. Tilted Empirical Risk Minimization, 2020, ICLR.
[66] Sanja Fidler, et al. Personalized Federated Learning with First Order Model Optimization, 2020, ICLR.
[67] Manzil Zaheer, et al. Adaptive Federated Optimization, 2020, ICLR.
[68] Qiang Wang, et al. Data Poisoning Attacks on Federated Machine Learning, 2020, IEEE Internet of Things Journal.
[69] Zaïd Harchaoui, et al. Robust Aggregation for Federated Learning, 2019, IEEE Transactions on Signal Processing.