Unified Group Fairness on Federated Learning

Federated learning (FL) has emerged as an important machine learning paradigm in which a global model is trained on private data held by distributed clients. However, most existing FL algorithms cannot guarantee performance fairness across different clients or different groups of samples because of distribution shift. Recent research focuses on achieving fairness among clients, but it ignores fairness toward groups formed by sensitive attribute(s) (e.g., gender and/or race), which is important and practical in real applications. To bridge this gap, we formulate the goal of unified group fairness on FL, which is to learn a fair global model with similar performance across different groups. To achieve unified group fairness for arbitrary sensitive attribute(s), we propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates distribution shift across groups and comes with a theoretical analysis of its convergence rate. Specifically, we treat the performance of the federated global model on each group as an objective and employ distributionally robust techniques to maximize the performance of the worst-performing group over an uncertainty set via group reweighting. We validate the advantages of G-DRFA under various distribution shift settings in experiments, and the results show that G-DRFA outperforms existing fair federated learning algorithms on unified group fairness.
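The group-reweighting idea behind the worst-group objective can be sketched in a few lines. This is a minimal illustration of distributionally robust group reweighting (an exponentiated-gradient update of the group weights), not the paper's actual G-DRFA implementation; the function name `group_dro_step`, the step size `eta`, and the toy losses are illustrative assumptions:

```python
import math

def group_dro_step(group_losses, weights, eta=0.1):
    # Exponentiated-gradient ascent on the group weights: groups with
    # higher loss receive more weight, so subsequent training focuses
    # on the worst-performing group.
    w = [wi * math.exp(eta * li) for wi, li in zip(weights, group_losses)]
    total = sum(w)
    return [wi / total for wi in w]  # renormalize onto the simplex

# Toy example: three groups, where group 2 currently performs worst.
weights = [1 / 3] * 3
losses = [0.2, 0.3, 0.9]
for _ in range(50):
    weights = group_dro_step(losses, weights)
# With static losses, the weight mass concentrates on the worst group.
```

In a federated setting such as the one the abstract describes, these weights would then scale each group's contribution to the aggregated model update, rather than averaging groups uniformly.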
