Fair Mixup: Fairness via Interpolation

Training classifiers under fairness constraints such as group fairness regularizes the disparity of predictions between groups. Nevertheless, even when the constraints are satisfied during training, they might not generalize at evaluation time. To improve the generalizability of fair classifiers, we propose fair mixup, a new data augmentation strategy for imposing the fairness constraint. In particular, we show that fairness can be achieved by regularizing the model on paths of interpolated samples between the groups. We use mixup, a powerful data augmentation strategy, to generate these interpolates. We analyze fair mixup and empirically show that it achieves better generalization for both accuracy and fairness metrics on tabular, vision, and language benchmarks. The code is available at https://github.com/chingyaoc/fair-mixup.
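
As a concrete illustration, the sketch below shows one way the path-regularization idea can be implemented in PyTorch for the demographic-parity setting: interpolate a batch from each demographic group and penalize how fast the mean prediction changes along the interpolation path. This is a minimal sketch, not the authors' exact implementation; the function name fair_mixup_penalty, the batch variables x0 and x1 (samples from the two groups), and the weight lam are illustrative assumptions.

```python
import torch
from torch.distributions import Beta

def fair_mixup_penalty(model, x0, x1, alpha=1.0):
    """Smoothness penalty along the mixup path between group batches.

    x0, x1: same-shape batches drawn from the two demographic groups.
    Estimates |d/dt E[f(t * x0 + (1 - t) * x1)]| at t ~ Beta(alpha, alpha);
    a small value means the mean prediction changes slowly as one group's
    inputs are interpolated toward the other's.
    """
    t = Beta(alpha, alpha).sample()                       # interpolation coefficient
    x_mix = (t * x0 + (1 - t) * x1).requires_grad_(True)  # point on the path
    out = model(x_mix)                                    # scores, shape (B,) or (B, 1)

    # Gradient of the summed predictions w.r.t. the interpolated inputs;
    # create_graph=True keeps the penalty differentiable for training.
    grad_x = torch.autograd.grad(out.sum(), x_mix, create_graph=True)[0]

    # Chain rule: d/dt f(x_mix) = <grad_x f(x_mix), x0 - x1>, per example.
    path_deriv = (grad_x * (x0 - x1)).reshape(len(x0), -1).sum(dim=1)
    return path_deriv.mean().abs()

# Usage: add the penalty to the task loss with a fairness weight lam.
# loss = criterion(model(x), y) + lam * fair_mixup_penalty(model, x0, x1)
```

An equality-of-opportunity variant can be obtained with the same penalty by restricting x0 and x1 to positively labeled examples from each group.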
