Counterfactual Memorization in Neural Language Models

Modern neural language models, widely used across NLP tasks, risk memorizing sensitive information from their training data. As models continue to scale up in parameters, training data, and compute, understanding memorization in language models is both important from a learning-theoretic point of view and practically crucial in real-world applications. An open question in previous studies of memorization in language models is how to filter out “common” memorization: most existing memorization criteria correlate strongly with the number of occurrences in the training set, and thus capture “common” memorization such as familiar phrases, public knowledge, or templated texts. In this paper, we provide a principled perspective inspired by a taxonomy of human memory in psychology. From this perspective, we formulate a notion of counterfactual memorization, which characterizes how a model’s predictions change if a particular document is omitted during training. We identify and study counterfactually memorized training examples in standard text datasets. We further estimate the influence of each training example on the validation set and on generated texts, and show that this can provide direct evidence of the source of memorization at test time.
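
To make the definition above concrete, the following is a minimal sketch of the subset-based estimator it suggests: train many models on random subsets of the training data, record each model's per-example performance (e.g., per-token accuracy), and compare the expected score on an example between the models that did and did not train on it. The function and array names, and the choice of per-example accuracy as the performance measure, are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def counterfactual_memorization(scores: np.ndarray, in_train: np.ndarray) -> np.ndarray:
    """Estimate counterfactual memorization mem(x) for each training example x.

    scores:   (num_models, num_examples) array; performance (e.g., per-token
              accuracy) of each subset-trained model on each training example.
    in_train: (num_models, num_examples) boolean mask; True where the example
              was included in that model's training subset.

    Returns an array of length num_examples with
        mem(x) = E[score(x) | x in training subset] - E[score(x) | x held out].
    """
    in_train = in_train.astype(bool)
    n_in = in_train.sum(axis=0)        # number of models that trained on x
    n_out = (~in_train).sum(axis=0)    # number of models that held x out
    # The max(..., 1) guard avoids division by zero if every model happened
    # to include (or exclude) a given example.
    mean_in = (scores * in_train).sum(axis=0) / np.maximum(n_in, 1)
    mean_out = (scores * ~in_train).sum(axis=0) / np.maximum(n_out, 1)
    return mean_in - mean_out

# Toy usage: 6 models trained on random halves of a 4-example dataset.
rng = np.random.default_rng(0)
in_train = rng.random((6, 4)) < 0.5
scores = rng.random((6, 4))
print(counterfactual_memorization(scores, in_train))
```

The influence of a training example on a validation or generated text can be estimated with the same ingredients: keep the membership mask for the training example, measure the models' scores on the other text instead, and take the same in-subset minus out-of-subset difference.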
