Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks

To be used ethically and legally, machine learning algorithms must both be fair and protect the privacy of the individuals whose data they use. However, enforcing privacy and fairness constraints may come at the cost of utility (Jayaraman & Evans, 2019; Gong et al., 2020). This paper investigates the privacy-utility-fairness trade-off in neural networks by comparing a Simple (S-NN), a Fair (F-NN), a Differentially Private (DP-NN), and a Differentially Private and Fair Neural Network (DPF-NN), evaluating differences in performance on metrics for privacy (ε, δ), fairness (risk difference), and utility (accuracy). In the scenario with the strongest privacy guarantees considered (ε = 0.1, δ = 0.00001), the DPF-NN achieved a better (lower) risk difference than all the other neural networks, with only marginally lower accuracy than the S-NN and DP-NN. This model is considered fair, as its risk difference fell below both the strict (0.05) and lenient (0.1) thresholds. However, while the accuracy of the proposed model improved on previous work by Xu, Yuan and Wu (2019), its risk difference was worse.
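The fairness metric referred to above, risk difference (also known as statistical parity difference), is the gap in positive-prediction rates between the unprotected and protected groups. Below is a minimal sketch of how it can be computed and checked against the strict (0.05) and lenient (0.1) thresholds; the function and variable names are illustrative, not taken from the paper.

import numpy as np

def risk_difference(y_pred, protected):
    # Risk difference: |P(y_hat = 1 | unprotected) - P(y_hat = 1 | protected)|
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)
    rate_protected = y_pred[protected].mean()
    rate_unprotected = y_pred[~protected].mean()
    return abs(rate_unprotected - rate_protected)

# Hypothetical binary predictions and group membership, for illustration only.
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = protected group

rd = risk_difference(y_hat, group)
print(f"risk difference = {rd:.3f}")
print("fair under strict threshold (0.05):", rd < 0.05)
print("fair under lenient threshold (0.1):", rd < 0.10)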

[1] Rachel K. E. Bellamy et al. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. arXiv, 2018.

[2] Pat Langley. Crafting Papers on Machine Learning. ICML, 2000.

[3] Thomas Steinke et al. Differential Privacy: A Primer for a Non-Technical Audience. 2018.

[4] Jaakko Hollmén et al. Mitigating Discrimination in Clinical Machine Learning Decision Support Using Algorithmic Processing Techniques. DS, 2020.

[5] Maoguo Gong et al. Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition. Neural Networks, 2020.

[6] Philip S. Yu et al. More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence. IEEE Transactions on Knowledge and Data Engineering, 2020.

[7] David Evans et al. Evaluating Differentially Private Machine Learning in Practice. USENIX Security Symposium, 2019.

[8] Suresh Venkatasubramanian et al. A comparative study of fairness-enhancing interventions in machine learning. FAT, 2018.

[9] Carlos Eduardo Scheidegger et al. Certifying and Removing Disparate Impact. KDD, 2014.

[10] Josep Domingo-Ferrer et al. Discrimination- and privacy-aware patterns. Data Mining and Knowledge Discovery, 2014.

[11] Somesh Jha et al. Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning. Journal of Computer Security, 2020.

[12] H. Brendan McMahan et al. A General Approach to Adding Differential Privacy to Iterative Training Procedures. arXiv, 2018.

[13] Mohamed Ali Kaafar et al. Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning. CCSW@CCS, 2020.

[14] Sharad Goel et al. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. arXiv, 2018.

[15] Xintao Wu et al. Achieving Differential Privacy and Fairness in Logistic Regression. WWW, 2019.

[16] Aaron Roth et al. Differentially Private Fair Learning. ICML, 2018.

[17] Hansol Lee et al. Evaluation of Fairness Trade-offs in Predicting Student Success. arXiv, 2020.

[18] Michael D. Ekstrand et al. Privacy for All: Ensuring Fair and Equitable Privacy Protections. FAT, 2018.

[19] Lidia Arroyo Prieto. ACM. In Encyclopedia of Cryptography and Security, 2020.

[20] Varun Gupta et al. On the Compatibility of Privacy and Fairness. UMAP, 2019.

[21] Christian Haas et al. Fairness in Machine Learning: A Survey. ACM Computing Surveys, 2020.

[22] Miao Pan et al. Differentially Private and Fair Classification via Calibrated Functional Mechanism. AAAI, 2020.