Continual Learning with Differential Privacy

In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models on a sequence of new tasks while retaining knowledge of previous tasks. We first introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the CL training process. Building on this notion, we develop a new DP-preserving algorithm for CL with a data sampling strategy, and quantify the privacy risk of training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal privacy guarantees for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining promising model utility.
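
To make the mechanism concrete, the sketch below is a minimal illustration, under stated assumptions rather than the paper's exact algorithm, of how DP-SGD-style per-example gradient clipping and Gaussian noise could be combined with A-GEM's reference-gradient projection; the cumulative privacy loss of such noisy updates is what a moments accountant would track. All names here (`dp_gradient`, `agem_project`, `clip_norm`, `noise_multiplier`) are hypothetical, and plain NumPy vectors stand in for model gradients.

```python
# Illustrative sketch only: DP-SGD-style noisy gradients combined with
# A-GEM-style gradient projection. Not the authors' exact algorithm.
import numpy as np

def dp_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-example gradient to `clip_norm`, sum, add Gaussian noise,
    and average (the standard DP-SGD recipe)."""
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm) for g in per_example_grads]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

def agem_project(g_task, g_ref):
    """A-GEM projection: if the current-task gradient conflicts with the
    episodic-memory reference gradient, project it onto the half-space
    where their inner product is non-negative."""
    dot = np.dot(g_task, g_ref)
    if dot < 0.0:
        g_task = g_task - (dot / np.dot(g_ref, g_ref)) * g_ref
    return g_task

# Toy usage: per-example gradients from the current batch and from a sample
# of the episodic memory (random vectors here, just to exercise the functions).
rng = np.random.default_rng(0)
current_batch_grads = [rng.normal(size=10) for _ in range(32)]
memory_batch_grads = [rng.normal(size=10) for _ in range(32)]

g_cur = dp_gradient(current_batch_grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
g_ref = dp_gradient(memory_batch_grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
update_direction = agem_project(g_cur, g_ref)
```

Computing the reference gradient from a noisy, clipped aggregate of the sampled episodic memory is one plausible way for both the current task's data and the memorized records to receive DP protection in this setting.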
