Measuring Forgetting of Memorized Training Examples
Florian Tramèr | Nicolas Papernot | Nicholas Carlini | Abhradeep Thakurta | Katherine Lee | Daphne Ippolito | Om Thakkar | Matthew Jagielski | Shuang Song | Eric Wallace | Chiyuan Zhang
[1] Luke Zettlemoyer,et al. Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models , 2022, ArXiv.
[2] Jonathan Ullman,et al. How to Combine Membership-Inference Attacks on Multiple Updated Models , 2022, ArXiv.
[3] Xi Victoria Lin,et al. OPT: Open Pre-trained Transformer Language Models , 2022, ArXiv.
[4] Rajiv Mathews,et al. Detecting Unintended Memorization in Language-Model-Fused ASR , 2022, INTERSPEECH.
[5] E. Amid,et al. Extracting Targeted Training Data from ASR Models, and How to Mitigate It , 2022, INTERSPEECH.
[6] Graham Cormode,et al. Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms , 2022, ArXiv.
[7] Andrew M. Dai,et al. PaLM: Scaling Language Modeling with Pathways , 2022, J. Mach. Learn. Res..
[8] Florian Tramèr,et al. Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets , 2022, CCS.
[9] Lisa Anne Hendricks,et al. Training Compute-Optimal Large Language Models , 2022, ArXiv.
[10] Nicolas Papernot,et al. Bounding Membership Inference , 2022, ArXiv.
[11] Alexandre Sablayrolles,et al. Defending against Reconstruction Attacks with Rényi Differential Privacy , 2022, ArXiv.
[12] Quantifying Memorization Across Neural Language Models , 2022, ArXiv.
[13] Deduplicating Training Data Mitigates Privacy Risks in Language Models , 2022, ArXiv.
[14] Florian Tramèr,et al. Membership Inference Attacks From First Principles , 2021, 2022 IEEE Symposium on Security and Privacy (SP).
[15] Graham Cormode,et al. On the Importance of Difficulty Calibration in Membership Inference Attacks , 2021, ICLR.
[16] Nicholas Carlini,et al. Deduplicating Training Data Makes Language Models Better , 2021, ACL.
[17] Chung-Cheng Chiu,et al. w2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training , 2021, 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
[18] Quoc V. Le,et al. CoAtNet: Marrying Convolution and Attention for All Data Sizes , 2021, NeurIPS.
[19] Florian Tramèr,et al. Antipodes of Label Differential Privacy: PATE and ALIBI , 2021, NeurIPS.
[20] Ananda Theertha Suresh,et al. Remember What You Want to Forget: Algorithms for Machine Unlearning , 2021, NeurIPS.
[21] Emily M. Bender,et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 , 2021, FAccT.
[22] Ilya Sutskever,et al. Learning Transferable Visual Models From Natural Language Supervision , 2021, ICML.
[23] Ryan A. Rossi,et al. Machine Unlearning via Algorithmic Stability , 2021, COLT.
[24] Alan Yuille,et al. Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping , 2021, ArXiv.
[25] Badih Ghazi,et al. Deep Learning with Label Differential Privacy , 2021, NeurIPS.
[26] Milad Nasr,et al. Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning , 2021, 2021 IEEE Symposium on Security and Privacy (SP).
[27] Stefano Soatto,et al. Mixed-Privacy Forgetting in Deep Networks , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Colin Raffel,et al. Extracting Training Data from Large Language Models , 2020, USENIX Security Symposium.
[29] Vijay Ganesh,et al. Amnesiac Machine Learning , 2020, AAAI.
[30] Seth Neel,et al. Descent-to-Delete: Gradient-Based Methods for Machine Unlearning , 2020, ALT.
[31] David Lie,et al. Machine Unlearning , 2019, 2021 IEEE Symposium on Security and Privacy (SP).
[32] Carmela Troncoso,et al. Disparate Vulnerability to Membership Inference Attacks , 2019, Proc. Priv. Enhancing Technol..
[33] Kamalika Chaudhuri,et al. Approximate Data Deletion from Machine Learning Models: Algorithms and Evaluations , 2020, AISTATS.
[34] Patrick Jaillet,et al. Variational Bayesian Unlearning , 2020, NeurIPS.
[35] Carl A. Gunter,et al. A Pragmatic Approach to Membership Inferences on Machine Learning Models , 2020, 2020 IEEE European Symposium on Security and Privacy (EuroS&P).
[36] Jonathan Ullman,et al. Auditing Differentially Private Machine Learning: How Private is Private SGD? , 2020, NeurIPS.
[37] Swaroop Ramaswamy,et al. Understanding Unintended Memorization in Federated Learning , 2020, ArXiv.
[38] Yu Zhang,et al. Conformer: Convolution-augmented Transformer for Speech Recognition , 2020, INTERSPEECH.
[39] Santiago Zanella Béguelin,et al. Analyzing Information Leakage of Updates to Natural Language Models , 2019, CCS.
[40] Stefano Soatto,et al. Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[41] L. V. D. Maaten,et al. Certified Data Removal from Machine Learning Models , 2019, ICML.
[42] M. Mozer,et al. Sequential Mastery of Multiple Visual Tasks: Networks Naturally Learn to Learn and Forget to Forget , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[43] Yang Zhang,et al. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning , 2019, USENIX Security Symposium.
[44] Stephanie L. Hyland,et al. An Empirical Study on the Intrinsic Privacy of SGD , 2019, ArXiv.
[45] Dawn Xiaodong Song,et al. Lifelong Anomaly Detection Through Unlearning , 2019, CCS.
[46] James Zou,et al. Making AI Forget You: Data Deletion in Machine Learning , 2019, NeurIPS.
[47] Aran Komatsuzaki,et al. One Epoch Is All You Need , 2019, ArXiv.
[48] Thomas Steinke,et al. Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation , 2019, NeurIPS.
[49] Cordelia Schmid,et al. White-box vs Black-box: Bayes Optimal Strategies for Membership Inference , 2019, ICML.
[50] Dawn Song,et al. Towards Practical Differentially Private Convex Optimization , 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[51] David Evans,et al. Evaluating Differentially Private Machine Learning in Practice , 2019, USENIX Security Symposium.
[52] Quoc V. Le,et al. Do Better ImageNet Models Transfer Better? , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[53] Jerry Li,et al. Privately Learning High-Dimensional Distributions , 2018, COLT.
[54] Úlfar Erlingsson,et al. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks , 2018, USENIX Security Symposium.
[55] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[56] Vitaly Feldman,et al. Privacy Amplification by Iteration , 2018, 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS).
[57] Kaiming He,et al. Exploring the Limits of Weakly Supervised Pretraining , 2018, ECCV.
[58] Úlfar Erlingsson,et al. Scalable Private Learning with PATE , 2018, ICLR.
[59] Reza Shokri,et al. Machine Learning with Membership Privacy using Adversarial Regularization , 2018, CCS.
[60] Somesh Jha,et al. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting , 2017, 2018 IEEE 31st Computer Security Foundations Symposium (CSF).
[61] Ronald Kemker,et al. Measuring Catastrophic Forgetting in Neural Networks , 2017, AAAI.
[62] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[63] Ilya Mironov,et al. Rényi Differential Privacy , 2017, 2017 IEEE 30th Computer Security Foundations Symposium (CSF).
[64] Razvan Pascanu,et al. Overcoming catastrophic forgetting in neural networks , 2016, Proceedings of the National Academy of Sciences.
[65] Vitaly Shmatikov,et al. Membership Inference Attacks Against Machine Learning Models , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[66] Martín Abadi,et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data , 2016, ICLR.
[67] Jeffrey F. Naughton,et al. Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics , 2016, SIGMOD Conference.
[68] Pramod Viswanath,et al. The Composition Theorem for Differential Privacy , 2013, IEEE Transactions on Information Theory.
[69] Ian Goodfellow,et al. Deep Learning with Differential Privacy , 2016, CCS.
[70] Li Zhang,et al. Nearly Optimal Private LASSO , 2015, NIPS.
[71] Thomas Steinke,et al. Robust Traceability from Trace Amounts , 2015, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science.
[72] Junfeng Yang,et al. Towards Making Systems Forget with Machine Unlearning , 2015, 2015 IEEE Symposium on Security and Privacy.
[73] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[74] Giovanni Felici,et al. Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers , 2013, Int. J. Secur. Networks.
[75] Somesh Jha,et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing , 2014, USENIX Security Symposium.
[76] Anand D. Sarwate,et al. Differentially Private Empirical Risk Minimization , 2009, J. Mach. Learn. Res..
[77] Michael I. Jordan,et al. Genomic privacy and limits of individual detection in a pool , 2009, Nature Genetics.
[78] S. Nelson,et al. Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays , 2008, PLoS genetics.
[79] R. French. Catastrophic forgetting in connectionist networks , 1999, Trends in Cognitive Sciences.
[80] Michael McCloskey,et al. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem , 1989.