When Differential Privacy Meets Interpretability: A Case Study

Given the increasing use of personal data to train Deep Neural Networks (DNNs) for tasks such as medical imaging and diagnosis, differentially private (DP) training of DNNs is surging in importance, and a large body of work focuses on providing a better privacy-utility trade-off. However, little attention has been given to the interpretability of these models, and to how the application of DP affects the quality of interpretations. We propose an extensive study of the effects of DP training on DNNs, focusing on medical imaging applications, using the APTOS dataset.
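As context for the trade-off the abstract refers to: differentially private DNN training is typically done with DP-SGD, whose core step clips each per-example gradient and adds calibrated Gaussian noise before averaging. The following is a minimal NumPy sketch of that step only (function name and toy gradients are illustrative, not from the paper); real training would use a DP library over actual model gradients.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm <= clip_norm, sum, add Gaussian noise with standard
    deviation noise_multiplier * clip_norm, then average."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
# Two toy per-example gradients; the first has norm 5 and gets clipped.
grads = [np.array([3.0, 4.0]), np.array([0.5, 0.0])]
g = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
# With noise_multiplier=0 this reduces to the clipped average: [0.55, 0.4]
```

The clipping bound limits any single example's influence on the update, which is what lets the added noise translate into a formal (epsilon, delta) privacy guarantee; the same clipping and noise are also what can degrade accuracy and, as the paper studies, interpretation quality.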
