Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
[1] Alexander M. Rush,et al. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model , 2022, ArXiv.
[2] R. Shokri,et al. Data Privacy and Trustworthy Machine Learning , 2022, IEEE Security & Privacy.
[3] G. Chrysostomou. Explainable Natural Language Processing , 2022, Computational Linguistics.
[4] Ivan Vulić,et al. Square One Bias in NLP: Towards a Multi-Dimensional Exploration of the Research Manifold , 2022, FINDINGS.
[5] Hao Zhou,et al. Enhancing Cross-lingual Transfer by Manifold Mixup , 2022, ICLR.
[6] Victor Petrén Bach Hansen,et al. The Impact of Differential Privacy on Group Disparity Mitigation , 2022, PRIVATENLP.
[7] Samuel R. Bowman,et al. One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks , 2022 .
[8] Anders Søgaard,et al. Revisiting Methods for Finding Influential Examples , 2021, ArXiv.
[9] Huseyin A. Inan,et al. Differentially Private Fine-tuning of Language Models , 2021, ICLR.
[10] Tatsunori B. Hashimoto,et al. Large Language Models Can Be Strong Differentially Private Learners , 2021, ICLR.
[11] Nicolas Papernot,et al. Hyperparameter Tuning with Renyi Differential Privacy , 2021, ICLR.
[12] Graham Cormode,et al. Opacus: User-Friendly Differential Privacy Library in PyTorch , 2021, ArXiv.
[13] Samuel R. Bowman,et al. Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers , 2021, BLACKBOXNLP.
[14] Hinrich Schütze,et al. Wine is Not v i n. - On the Compatibility of Tokenizations Across Languages , 2021, EMNLP.
[15] Anders Søgaard,et al. The Impact of Positional Encodings on Multilingual Compression , 2021, EMNLP.
[16] Alexander M. Rush,et al. Datasets: A Community Library for Natural Language Processing , 2021, EMNLP.
[17] Ivan Habernal,et al. When differential privacy meets NLP: The devil is in the detail , 2021, EMNLP.
[18] Carsten Eickhoff,et al. IsoScore: Measuring the Uniformity of Embedding Space Utilization , 2021, FINDINGS.
[19] Fatemehsadat Mireshghallah,et al. When Differential Privacy Meets Interpretability: A Case Study , 2021, ArXiv.
[20] Ziming Huang,et al. On Sample Based Explanation Methods for NLP: Faithfulness, Efficiency and Semantic Evaluation , 2021, ACL.
[21] Lidong Bing,et al. On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation , 2021, ACL.
[22] Mohammad Taher Pilehvar,et al. A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space , 2021, ACL.
[23] Arianna Bisazza,et al. Using Confidential Data for Domain Adaptation of Neural Machine Translation , 2021, PRIVATENLP.
[24] Kamalika Chaudhuri,et al. Understanding Instance-based Interpretability of Variational Auto-Encoders , 2021, NeurIPS.
[25] Monojit Choudhury,et al. How Linguistically Fair Are Multilingual Pre-Trained Language Models? , 2021, AAAI.
[26] Serena Booth,et al. Do Feature Attribution Methods Correctly Attribute Features? , 2021, AAAI.
[27] Giacomo Spigler,et al. Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks , 2021, ArXiv.
[28] Benjamin Muller,et al. First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT , 2021, EACL.
[29] Khaled Shaalan,et al. Self-Training Pre-Trained Language Models for Zero- and Few-Shot Multi-Dialectal Arabic Sequence Labeling , 2021, EACL.
[30] Bo Liu,et al. When Machine Learning Meets Privacy , 2020, ACM Comput. Surv..
[31] Michael A. Lepori,et al. Picking BERT’s Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis , 2020, COLING.
[32] Dan Boneh,et al. Differentially Private Learning Needs Better Features (or Much More Data) , 2020, ICLR.
[33] R. Shokri,et al. On the Privacy Risks of Algorithmic Fairness , 2020, 2021 IEEE European Symposium on Security and Privacy (EuroS&P).
[34] Goran Glavaš,et al. From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers , 2020, EMNLP.
[35] Hinrich Schütze,et al. Identifying Elements Essential for BERT’s Multilinguality , 2020, EMNLP.
[36] Lingjuan Lyu,et al. Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness , 2020, FINDINGS.
[37] Ekaterina Shutova,et al. What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties , 2020, ArXiv.
[38] Dylan Slack,et al. Differentially Private Language Models Benefit from Public Pre-training , 2020, PRIVATENLP.
[39] N. Arun,et al. Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging , 2020, medRxiv.
[40] Yulia Tsvetkov,et al. Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions , 2020, ACL.
[41] Bill Yuchen Lin,et al. IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization , 2020, AAAI.
[42] Jing Huang,et al. Improving Neural Language Generation with Spectrum Control , 2020, ICLR.
[43] Sampo Pyysalo,et al. Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection , 2020, LREC.
[44] Iryna Gurevych,et al. Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation , 2020, EMNLP.
[45] Malvina Nissim,et al. What’s so special about BERT’s layers? A closer look at the NLP pipeline in monolingual and multilingual models , 2020, FINDINGS.
[46] Alexander M. Fraser,et al. On the Language Neutrality of Pre-trained Multilingual Representations , 2020, FINDINGS.
[47] Thomas Steinke,et al. The Discrete Gaussian for Differential Privacy , 2020, NeurIPS.
[48] Jacob Devlin,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[49] Frederick Liu,et al. Estimating Training Data Influence by Tracking Gradient Descent , 2020, NeurIPS.
[50] Dan Roth,et al. Cross-Lingual Ability of Multilingual BERT: An Empirical Study , 2019, ICLR.
[51] Guillaume Charpiat,et al. Input Similarity from the Neural Network Perspective , 2019, NeurIPS.
[52] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[53] Myle Ott,et al. Unsupervised Cross-lingual Representation Learning at Scale , 2019, ACL.
[54] Luke Zettlemoyer,et al. Emerging Cross-lingual Structure in Pretrained Language Models , 2019, ACL.
[55] S. Feizi,et al. Second-Order Group Influence Functions for Black-Box Predictions , 2019, ArXiv.
[56] Richard Socher,et al. BERT is Not an Interlingua and the Bias of Tokenization , 2019, EMNLP.
[57] Lysandre Debut,et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing , 2019, ArXiv.
[58] Kawin Ethayarajh,et al. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings , 2019, EMNLP.
[59] Li Zhang,et al. Rényi Differential Privacy of the Sampled Gaussian Mechanism , 2019, ArXiv.
[60] Kristina Lerman,et al. A Survey on Bias and Fairness in Machine Learning , 2019, ACM Comput. Surv..
[61] Jordan Rodu,et al. Getting in Shape: Word Embedding SubSpaces , 2019, IJCAI.
[62] Di He,et al. Representation Degeneration Problem in Training Natural Language Generation Models , 2019, ICLR.
[63] Holger Schwenk,et al. WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia , 2019, EACL.
[64] Reza Shokri,et al. On the Privacy Risks of Model Explanations , 2019, AIES.
[65] Varun Gupta,et al. On the Compatibility of Privacy and Fairness , 2019, UMAP.
[66] Eva Schlinger,et al. How Multilingual is Multilingual BERT? , 2019, ACL.
[67] Vitaly Shmatikov,et al. Differential Privacy Has Disparate Impact on Model Accuracy , 2019, NeurIPS.
[68] Vitalii Zhelezniak,et al. Correlation Coefficients and Semantic Textual Similarity , 2019, NAACL.
[69] Afra Alishahi,et al. Correlating Neural and Symbolic Representations of Language , 2019, ACL.
[70] Geoffrey E. Hinton,et al. Similarity of Neural Network Representations Revisited , 2019, ICML.
[71] Percy Liang,et al. On the Accuracy of Influence Functions for Measuring Group Effects , 2019, NeurIPS.
[72] Mark Dredze,et al. Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT , 2019, EMNLP.
[73] R. C. Williamson,et al. Fairness risk measures , 2019, ICML.
[74] Holger Schwenk,et al. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond , 2018, Transactions of the Association for Computational Linguistics.
[75] Grzegorz Chrupala,et al. Symbolic Inductive Bias for Visually Grounded Learning of Spoken Language , 2018, ACL.
[76] Aaron Roth,et al. Differentially Private Fair Learning , 2018, ICML.
[77] Pradeep Ravikumar,et al. Representer Point Selection for Explaining Deep Neural Networks , 2018, NeurIPS.
[78] Kunal Talwar,et al. Private selection from private candidates , 2018, STOC.
[79] Wei Ding,et al. Tight Analysis of Privacy and Utility Tradeoff in Approximate Differential Privacy , 2018, AISTATS.
[80] Guillaume Lample,et al. XNLI: Evaluating Cross-lingual Sentence Representations , 2018, EMNLP.
[81] Marco Baroni,et al. How agents see things: On visual representations in an emergent language game , 2018, EMNLP.
[82] Julia Rubin,et al. Fairness Definitions Explained , 2018, 2018 IEEE/ACM International Workshop on Software Fairness (FairWare).
[83] Samuel R. Bowman,et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding , 2018, BlackboxNLP@EMNLP.
[84] Frank Hutter,et al. Decoupled Weight Decay Regularization , 2017, ICLR.
[85] Dumitru Erhan,et al. The (Un)reliability of saliency methods , 2017, Explainable AI.
[86] H. Brendan McMahan,et al. Learning Differentially Private Recurrent Language Models , 2017, ICLR.
[87] Thomas Miconi,et al. The impossibility of "fairness": a generalized impossibility result for decisions , 2017, ArXiv.
[88] M. Kearns,et al. Fairness in Criminal Justice Risk Assessments: The State of the Art , 2017, Sociological Methods & Research.
[89] Percy Liang,et al. Understanding Black-box Predictions via Influence Functions , 2017, ICML.
[90] Ilya Mironov,et al. Rényi Differential Privacy , 2017, 2017 IEEE 30th Computer Security Foundations Symposium (CSF).
[91] Jörn Diedrichsen,et al. Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis , 2017, bioRxiv.
[92] Abhishek Das,et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization , 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[93] Ian Goodfellow,et al. Deep Learning with Differential Privacy , 2016, CCS.
[94] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[95] Aaron Roth,et al. The Algorithmic Foundations of Differential Privacy , 2014, Found. Trends Theor. Comput. Sci..
[96] Ninghui Li,et al. On sampling, anonymization, and differential privacy or, k-anonymization meets differential privacy , 2011, ASIACCS '12.
[97] Nikolaus Kriegeskorte,et al. Representational similarity analysis - connecting the branches of systems neuroscience , 2008, Frontiers in Systems Neuroscience.
[98] Cynthia Dwork,et al. Differential Privacy , 2006, ICALP.
[99] S. Edelman,et al. Representation is representation of similarities , 1996, Behavioral and Brain Sciences.
[100] Christopher M. Bishop,et al. Current address: Microsoft Research , 2022 .
[101] Michele Banko,et al. Practical Transformer-based Multilingual Text Classification , 2021, NAACL.
[102] Goran Glavaš,et al. Is Supervised Syntactic Parsing Beneficial for Language Understanding Tasks? An Empirical Investigation , 2021, EACL.
[103] Proceedings of the First Workshop on Trustworthy Natural Language Processing , 2021 .
[104] Genta Indra Winata,et al. Preserving Cross-Linguality of Pre-trained Models via Continual Learning , 2021, REPL4NLP.