Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach

A critical concern in data-driven decision making is building models whose outcomes do not discriminate against demographic groups defined by attributes such as gender, ethnicity, or age. Ensuring non-discrimination in learning tasks requires knowledge of the sensitive attributes; in practice, however, these attributes may not be available due to legal and ethical requirements. To address this challenge, this paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors. The method relies on the notion of differential privacy and on Lagrangian duality to design neural networks that can accommodate fairness constraints while guaranteeing the privacy of the sensitive attributes. The paper analyses the tension between accuracy, privacy, and fairness, and the experimental evaluation illustrates the benefits of the proposed model on several prediction tasks.

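To make the approach concrete, the fairness requirement can be viewed as a constrained problem min_θ max_{λ≥0} L(θ) + λ·violation(θ), solved by alternating a primal descent step on the model parameters with a dual ascent step on the multiplier λ, where the gradients that touch the sensitive attribute are privatized. The following is a minimal, self-contained sketch of that scheme, not the authors' code: the demographic-parity surrogate `dp_gap`, the synthetic data, and all hyperparameter values are illustrative assumptions, and a real implementation would additionally need a proper sensitivity analysis and a privacy accountant (e.g., Rényi DP) to track the cumulative (ε, δ) budget.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data standing in for a real benchmark: features x, binary labels y,
# and a binary sensitive attribute a correlated with the label.
n, d = 512, 10
x = torch.randn(n, d)
a = torch.randint(0, 2, (n,)).float()
y = ((x[:, 0] + 0.5 * a + 0.1 * torch.randn(n)) > 0).float()

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

lam, dual_lr = 0.0, 0.05          # Lagrange multiplier and dual-ascent step (illustrative)
clip_norm, noise_mult = 1.0, 1.0  # DP clipping bound C and noise multiplier (illustrative)

def dp_gap(probs, group):
    """Soft demographic-parity surrogate: |E[p | a=1] - E[p | a=0]|."""
    return (probs[group == 1].mean() - probs[group == 0].mean()).abs()

for step in range(200):
    opt.zero_grad()
    logits = model(x).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    violation = dp_gap(torch.sigmoid(logits), a)

    # Primal step: gradient of the task loss plus lam times the *privatized*
    # gradient of the fairness term, which is the only quantity that reads a.
    g_task = torch.autograd.grad(loss, list(model.parameters()), retain_graph=True)
    g_fair = torch.autograd.grad(violation, list(model.parameters()))
    norm = torch.cat([g.flatten() for g in g_fair]).norm().item()
    scale = min(1.0, clip_norm / (norm + 1e-12))
    for p, gt, gf in zip(model.parameters(), g_task, g_fair):
        noisy = gf * scale + (noise_mult * clip_norm / n) * torch.randn_like(gf)
        p.grad = gt + lam * noisy
    opt.step()

    # Dual step: ascend on the multiplier using a noised violation, so the
    # multiplier itself never depends on the raw sensitive attribute.
    noisy_violation = violation.item() + (noise_mult / n) * torch.randn(()).item()
    lam = max(0.0, lam + dual_lr * noisy_violation)
```

Because the sensitive attribute enters only through the fairness term and its associated dual update, clipping and noising those two quantities is what lets the sketch trade a small loss in the fidelity of the constraint enforcement for a differential privacy guarantee on the attribute.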