Federated Multi-Task Learning for Competing Constraints

In addition to accuracy, fairness and robustness are two critical concerns for federated learning systems. In this work, we first identify that robustness to adversarial training-time attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general multi-task learning objective and analyze its ability to achieve a favorable tradeoff between fairness and robustness. We then develop a scalable solver for this objective and show that multi-task learning can yield models that are more accurate, robust, and fair than state-of-the-art baselines across a suite of federated datasets.
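As a rough illustration of how such a multi-task objective could be instantiated, the sketch below simulates a personalization-style setup in which each device trains a local model that is regularized toward a shared global model, with a coefficient lambda controlling how far local models may drift from it. This is only an assumed form written for illustration under that assumption; the toy linear-regression data, the squared loss, and the names local_update / global_update are ours and do not come from the paper, which does not spell out its objective or solver in this abstract.

```python
# Illustrative sketch only: one common way to set up federated multi-task learning,
# where device k minimizes  F_k(v_k) + (lam / 2) * ||v_k - w_global||^2.
# lam -> 0 recovers purely local models; lam -> inf recovers a single global model.
import numpy as np

def local_update(v_k, w_global, X_k, y_k, lam, lr=0.1, steps=10):
    """Device-side step on the assumed multi-task objective (squared loss F_k)."""
    for _ in range(steps):
        grad_fit = X_k.T @ (X_k @ v_k - y_k) / len(y_k)  # gradient of local loss
        grad_reg = lam * (v_k - w_global)                 # pull toward global model
        v_k = v_k - lr * (grad_fit + grad_reg)
    return v_k

def global_update(w_global, X_k, y_k, lr=0.1, steps=10):
    """Device-side step on the plain global objective (FedAvg-style)."""
    for _ in range(steps):
        grad = X_k.T @ (X_k @ w_global - y_k) / len(y_k)
        w_global = w_global - lr * grad
    return w_global

# Toy simulation: statistically heterogeneous devices, each with its own distribution.
rng = np.random.default_rng(0)
d, num_devices, lam = 5, 4, 0.5
w_global = np.zeros(d)
personal = [np.zeros(d) for _ in range(num_devices)]
data = []
for k in range(num_devices):
    X = rng.normal(size=(50, d))
    true_w = rng.normal(size=d)  # device-specific ground truth (heterogeneity)
    data.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

for rnd in range(20):  # communication rounds
    updates = []
    for k in range(num_devices):
        X_k, y_k = data[k]
        updates.append(global_update(w_global.copy(), X_k, y_k))
        personal[k] = local_update(personal[k], w_global, X_k, y_k, lam)
    w_global = np.mean(updates, axis=0)  # server averages the global-model updates

# Per-device training loss of the personalized models; uniformity of these values
# across devices corresponds to the fairness notion discussed in the abstract.
losses = [float(np.mean((X @ v - y) ** 2)) for (X, y), v in zip(data, personal)]
print("per-device losses:", [round(l, 3) for l in losses])
```

Under this assumed form, lambda would mediate the tradeoff the abstract refers to: small values keep local models insulated from a corrupted global model, while larger values push all devices toward shared, more uniform behavior.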
