Daphne Ippolito | Nicholas Carlini | Florian Tramèr | Matthew Jagielski | Chiyuan Zhang | Katherine Lee
[1] Jonathan Ullman, et al. Auditing Differentially Private Machine Learning: How Private is Private SGD?, 2020, NeurIPS.
[2] Andrei Z. Broder, et al. On the resemblance and containment of documents, 1997, Proceedings of Compression and Complexity of SEQUENCES 1997.
[3] Andreas Terzis, et al. Membership Inference Attacks From First Principles, 2021, arXiv.
[4] Carl A. Gunter, et al. A Pragmatic Approach to Membership Inferences on Machine Learning Models, 2020, IEEE European Symposium on Security and Privacy (EuroS&P).
[5] Sayan Mukherjee, et al. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization, 2006, Adv. Comput. Math.
[6] Aaron Roth, et al. The Algorithmic Foundations of Differential Privacy, 2014, Found. Trends Theor. Comput. Sci.
[7] Colin Raffel, et al. Extracting Training Data from Large Language Models, 2020, USENIX Security Symposium.
[8] Cordelia Schmid, et al. White-box vs Black-box: Bayes Optimal Strategies for Membership Inference, 2019, ICML.
[9] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[10] F. Hampel. The Influence Curve and Its Role in Robust Estimation, 1974.
[11] Michael S. Bernstein, et al. On the Opportunities and Risks of Foundation Models, 2021, arXiv.
[12] Milad Nasr, et al. Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning, 2021, IEEE Symposium on Security and Privacy (SP).
[13] L. Squire. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans, 1992, Psychological Review.
[14] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[15] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[16] Ali Farhadi, et al. Defending Against Neural Fake News, 2019, NeurIPS.
[17] E. Tulving. Elements of Episodic Memory, 1983, Oxford University Press.
[18] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[19] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[20] Percy Liang, et al. On the Accuracy of Influence Functions for Measuring Group Effects, 2019, NeurIPS.
[21] Peter Henderson, et al. Ethical Challenges in Data-Driven Dialogue Systems, 2017, AIES.
[22] Santiago Zanella-Béguelin, et al. Analyzing Information Leakage of Updates to Natural Language Models, 2019, CCS.
[23] L. Squire, et al. Preserved learning and retention of pattern-analyzing skill in amnesia: dissociation of knowing how and knowing that, 1980, Science.
[24] Douglas Eck, et al. Deduplicating Training Data Makes Language Models Better, 2021, arXiv.
[25] Úlfar Erlingsson, et al. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks, 2018, USENIX Security Symposium.
[26] Maarten Sap, et al. Documenting the English Colossal Clean Crawled Corpus, 2021, arXiv.
[27] Samyadeep Basu, et al. Influence Functions in Deep Learning Are Fragile, 2020, ICLR.
[28] Zihang Dai, et al. Wiki-40B: Multilingual Language Model Dataset, 2020, LREC.
[29] Vitaly Feldman, et al. What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation, 2020, NeurIPS.
[30] Swaroop Ramaswamy, et al. Understanding Unintended Memorization in Federated Learning, 2020, arXiv.
[31] Dietrich Klakow, et al. Investigating the Impact of Pre-trained Word Embeddings on Memorization in Neural Networks, 2020, TDS.
[32] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, IEEE Symposium on Security and Privacy (SP).
[33] Vitaly Feldman, et al. Does Learning Require Memorization? A Short Tale About a Long Tail, 2019, STOC.
[34] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[35] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[36] Frederick Liu, et al. Estimating Training Data Influence by Tracking Gradient Descent, 2020, NeurIPS.