[1] Matteo Negri,et al. Gender in Danger? Evaluating Speech Translation Technology on the MuST-SHE Corpus , 2020, ACL.
[2] Chandler May,et al. On Measuring Social Biases in Sentence Encoders , 2019, NAACL.
[3] Chandler May,et al. Social Bias in Elicited Natural Language Inferences , 2017, EthNLP@EACL.
[4] Yasmeen Hitti,et al. Proposed Taxonomy for Gender Bias in Text; A Filtering Methodology for the Gender Generalization Subtype , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[5] Danushka Bollegala,et al. Gender-preserving Debiasing for Pre-trained Word Embeddings , 2019, ACL.
[6] Anupam Datta,et al. Gender Bias in Neural Natural Language Processing , 2018, Logic, Language, and Security.
[7] Adam Tauman Kalai,et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings , 2016, NIPS.
[8] A. Mitra. Establishment size, employment, and the gender wage gap , 2003 .
[9] Lauren Ackerman,et al. Syntactic and cognitive issues in investigating gendered coreference , 2019 .
[10] Ahmed Y. Tawfik,et al. Gender aware spoken language translation applied to English-Arabic , 2018, 2018 2nd International Conference on Natural Language and Speech Processing (ICNLSP).
[11] Graeme Hirst,et al. Understanding Undesirable Word Embedding Associations , 2019, ACL.
[12] Ryan Cotterell,et al. Examining Gender Bias in Languages with Grammatical Gender , 2019, EMNLP.
[13] Elias Benussi,et al. Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models , 2021, NeurIPS.
[14] Colin Raffel,et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer , 2019, J. Mach. Learn. Res..
[15] Alan W Black,et al. Measuring Bias in Contextualized Word Representations , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[16] Barbara J. Risman,et al. Gender As a Social Structure , 2004 .
[17] Melvin Wevers,et al. Using Word Embeddings to Examine Gender Bias in Dutch Newspapers, 1950-1990 , 2019, LChange@ACL.
[18] Magnus Sahlgren,et al. Gender Bias in Pretrained Swedish Embeddings , 2019, NODALIDA.
[19] Nizar Habash,et al. Automatic Gender Identification and Reinflection in Arabic , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[20] Dan Jurafsky,et al. Content Analysis of Textbooks via Natural Language Processing: Findings on Gender, Race, and Ethnicity in Texas U.S. History Textbooks , 2020, AERA Open.
[21] Noah A. Smith,et al. Evaluating Gender Bias in Machine Translation , 2019, ACL.
[22] Londa Schiebinger,et al. Scientific research must take gender into account , 2014, Nature.
[23] Marco Gaido,et al. Gender Bias in Machine Translation , 2021, Transactions of the Association for Computational Linguistics.
[24] Veselin Stoyanov,et al. Unsupervised Cross-lingual Representation Learning at Scale , 2019, ACL.
[25] Cristian Danescu-Niculescu-Mizil,et al. Tie-breaker: Using language models to quantify gender bias in sports journalism , 2016, ArXiv.
[26] João Sedoc,et al. Conceptor Debiasing of Word Representations Evaluated on WEAT , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[27] Eduard H. Hovy,et al. Five sources of bias in natural language processing , 2021, Lang. Linguistics Compass.
[28] Thamar Solorio,et al. Aggression and Misogyny Detection using BERT: A Multi-Task Approach , 2020, TRAC.
[29] Pradyumna Tambwekar,et al. Towards a Comprehensive Understanding and Accurate Evaluation of Societal Biases in Pre-Trained Transformers , 2021, NAACL.
[30] Danushka Bollegala,et al. Debiasing Pre-trained Contextualised Embeddings , 2021, EACL.
[31] Pasquale Minervini,et al. Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models , 2021, EACL.
[32] A. Hood,et al. Gender , 2019, Textile History.
[33] Shrikanth S. Narayanan,et al. A quantitative analysis of gender differences in movies using psycholinguistic normatives , 2015, EMNLP.
[34] João Sedoc,et al. The Role of Protected Class Word Lists in Bias Identification of Contextualized Word Representations , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[35] Li Lucy,et al. Gender and Representation Bias in GPT-3 Generated Stories , 2021, NUSE.
[36] Jesse Vig,et al. A Multiscale Visualization of Attention in the Transformer Model , 2019, ACL.
[37] Yulia Tsvetkov,et al. Contextual Affective Analysis: A Case Study of People Portrayals in Online #MeToo Stories , 2019, ICWSM.
[38] Zeyu Li,et al. Learning Gender-Neutral Word Embeddings , 2018, EMNLP.
[39] Huanqi Cao,et al. CPM: A Large-scale Generative Chinese Pre-trained Language Model , 2020, AI Open.
[40] Mike Thelwall,et al. A Community of Curious Souls: An Analysis of Commenting Behavior on TED Talks Videos , 2014, PloS one.
[41] Kenneth Heafield,et al. Gender bias amplification during Speed-Quality optimization in Neural Machine Translation , 2021, ACL.
[42] Amy Beth Warriner,et al. Norms of valence, arousal, and dominance for 13,915 English lemmas , 2013, Behavior Research Methods.
[43] Benjamin Van Durme,et al. Reporting bias and knowledge acquisition , 2013, AKBC '13.
[44] Saif Mohammad,et al. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems , 2018, *SEMEVAL.
[45] Helen Nissenbaum,et al. Bias in computer systems , 1996, TOIS.
[46] Ryan Cotterell,et al. It’s All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution , 2019, EMNLP.
[47] Allan Paivio,et al. Extensions of the Paivio, Yuille, and Madigan (1968) norms , 2004, Behavior Research Methods, Instruments, & Computers.
[48] Dana E. Mastro,et al. Mean Girls? The Influence of Gender Portrayals in Teen Movies on Emerging Adults' Gender-Based Attitudes and Beliefs , 2008 .
[49] Marta R. Costa-jussà,et al. Equalizing Gender Bias in Neural Machine Translation with Word Embeddings Techniques , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[50] Shrikanth S. Narayanan,et al. Linguistic analysis of differences in portrayal of movie characters , 2017, ACL.
[51] Cristina Espana-Bonet,et al. GeBioToolkit: Automatic Extraction of Gender-Balanced Multilingual Corpus of Wikipedia Biographies , 2019, LREC.
[52] M. Dumont,et al. Insidious dangers of benevolent sexism: Consequences for women's performance , 2007, Journal of Personality and Social Psychology.
[53] Soujanya Poria,et al. Investigating Gender Bias in BERT , 2020, Cognitive Computation.
[54] J. Butler. Gender Trouble: Feminism and the Subversion of Identity , 1990 .
[55] Davis Liang,et al. Masked Language Model Scoring , 2019, ACL.
[56] Dirk Hovy,et al. HONEST: Measuring Hurtful Sentence Completion in Language Models , 2021, NAACL.
[57] Arvind Narayanan,et al. Semantics derived automatically from language corpora contain human-like biases , 2016, Science.
[58] Saif Mohammad,et al. SemEval-2018 Task 1: Affect in Tweets , 2018, *SEMEVAL.
[59] Peter Henderson,et al. Ethical Challenges in Data-Driven Dialogue Systems , 2017, AIES.
[60] Marc Choueiti,et al. Gender Bias Without Borders: An Investigation of Female Characters in Popular Films Across 11 Countries , 2014 .
[61] William Yang Wang,et al. They, Them, Theirs: Rewriting with Gender-Neutral English , 2021, ArXiv.
[62] Yulia Tsvetkov,et al. Entity-Centric Contextual Affective Analysis , 2019, ACL.
[63] Ryan Cotterell,et al. Unsupervised Discovery of Gendered Language through Latent-Variable Modeling , 2019, ACL.
[64] Alan W Black,et al. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings , 2019, NAACL.
[65] Mark Chen,et al. Language Models are Few-Shot Learners , 2020, NeurIPS.
[66] S. Lemon,et al. The Ambivalent Sexism Inventory: Differentiating Hostile and Benevolent Sexism , 2001 .
[67] Yoav Goldberg,et al. Adversarial Removal of Demographic Attributes from Text Data , 2018, EMNLP.
[68] Samuel R. Bowman,et al. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models , 2020, EMNLP.
[69] A. Paivio,et al. Concreteness, imagery, and meaningfulness values for 925 nouns , 1968, Journal of Experimental Psychology.
[70] Christiane Fellbaum,et al. An Analysis of WordNet’s Coverage of Gender Identity Using Twitter and The National Transgender Discrimination Survey , 2016, GWC.
[71] Isabelle Augenstein,et al. Quantifying gender bias towards politicians in cross-lingual language models , 2021, PloS one.
[72] Yejin Choi,et al. Connotation Frames of Power and Agency in Modern Films , 2017, EMNLP.
[73] Marcis Pinnis,et al. Mitigating Gender Bias in Machine Translation with Target Gender Annotations , 2020, WMT.
[74] Goran Glavas,et al. Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors , 2019, *SEMEVAL.
[75] Christiane Fellbaum,et al. Mining Twitter as a First Step toward Assessing the Adequacy of Gender Identification Terms on Intake Forms , 2015, AMIA.
[76] Andy Way,et al. Getting Gender Right in Neural Machine Translation , 2019, EMNLP.
[77] M. Costa-jussà,et al. Fine-tuning Neural Machine Translation on Gender-Balanced Datasets , 2020, GEBNLP.
[78] Mai ElSherief,et al. Mitigating Gender Bias in Natural Language Processing: Literature Review , 2019, ACL.
[79] Philipp Koehn,et al. Re-evaluating the Role of Bleu in Machine Translation Research , 2006, EACL.
[80] Rada Mihalcea,et al. Women’s Syntactic Resilience and Men’s Grammatical Luck: Gender-Bias in Part-of-Speech Tagging and Dependency Parsing , 2019, ACL.
[81] Pedro A. Fuertes-Olivera. A corpus-based view of lexical gender in written Business English , 2007 .
[82] Yoav Goldberg,et al. Filling Gender & Number Gaps in Neural Machine Translation with Black-box Context Injection , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[83] José A. R. Fonollosa,et al. Towards Mitigating Gender Bias in a decoder-based Neural Machine Translation model by Adding Contextual Information , 2020, WINLP.
[84] Blake Lemoine,et al. Mitigating Unwanted Biases with Adversarial Learning , 2018, AIES.
[85] Marta R. Costa-jussà,et al. Gendered Ambiguous Pronoun (GAP) Shared Task at the Gender Bias in NLP Workshop 2019 , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[86] Solon Barocas,et al. Language (Technology) is Power: A Critical Survey of “Bias” in NLP , 2020, ACL.
[87] Yusu Qian,et al. Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function , 2019, ACL.
[88] Isabelle Augenstein,et al. Quantifying Gender Biases Towards Politicians on Reddit , 2021, ArXiv.
[89] David Bamman,et al. Gender identity and lexical variation in social media , 2012, ArXiv.
[90] Saif Mohammad,et al. Obtaining Reliable Human Ratings of Valence, Arousal, and Dominance for 20,000 English Words , 2018, ACL.
[91] Rachel Rudinger,et al. Gender Bias in Coreference Resolution , 2018, NAACL.
[92] Sameep Mehta,et al. Analyze, Detect and Remove Gender Stereotyping from Bollywood Movies , 2018, FAT.
[93] Dirk Hovy,et al. Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter , 2016, NAACL.
[94] Yi Chern Tan,et al. Assessing Social and Intersectional Biases in Contextualized Word Representations , 2019, NeurIPS.
[95] Bill Byrne,et al. Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem , 2020, ACL.
[96] Lilja Øvrelid,et al. Gender and sentiment, critics and authors: a dataset of Norwegian book reviews , 2020, GEBNLP.
[97] Shikha Bordia,et al. Identifying and Reducing Gender Bias in Word-Level Language Models , 2019, NAACL.
[98] Jeff M. Phillips,et al. Attenuating Bias in Word Vectors , 2019, AISTATS.
[99] Thanassis Tiropanis,et al. The problem of identifying misogynist language on Twitter (and other online social spaces) , 2016, WebSci.
[100] Jackie Chi Kit Cheung,et al. The KnowRef Coreference Corpus: Removing Gender and Number Cues for Difficult Pronominal Anaphora Resolution , 2018, ACL.
[101] Yusu Qian,et al. Gender Stereotypes Differ between Male and Female Writings , 2019, ACL.
[102] Sonja Schmer-Galunder,et al. Relating Word Embedding Gender Biases to Gender Gaps: A Cross-Cultural Analysis , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[103] Emily M. Bender,et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 , 2021, FAccT.
[104] Luís C. Lamb,et al. Assessing gender bias in machine translation: a case study with Google Translate , 2018, Neural Computing and Applications.
[105] Nanyun Peng,et al. Towards Controllable Biases in Language Generation , 2020, FINDINGS.
[106] Xingce Bao,et al. Transfer Learning from Pre-trained BERT for Pronoun Resolution , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[107] Saif Mohammad,et al. Crowdsourcing a Word–Emotion Association Lexicon , 2013, Comput. Intell..
[108] Brian Larson,et al. Gender as a Variable in Natural-Language Processing: Ethical Considerations , 2017, EthNLP@EACL.
[109] Robert Munro,et al. Detecting Independent Pronoun Bias with Partially-Synthetic Data Generation , 2020, EMNLP.
[110] Hector J. Levesque,et al. The Winograd Schema Challenge , 2011, AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
[111] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[112] Verena Rieser,et al. #MeToo Alexa: How Conversational Systems Respond to Sexual Harassment , 2018, EthNLP@NAACL-HLT.
[113] Jieyu Zhao,et al. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints , 2017, EMNLP.
[114] Claudia Wagner,et al. How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? , 2021, EMNLP.
[115] Yang Trista Cao,et al. Toward Gender-Inclusive Coreference Resolution , 2019, ACL.
[116] Cheris Kramarae,et al. A Feminist Dictionary , 1985 .
[117] Jayadev Bhaskaran,et al. Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[118] Jieyu Zhao,et al. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods , 2018, NAACL.
[119] Nam Soo Kim,et al. On Measuring Gender Bias in Translation of Gender-neutral Pronouns , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[120] Corina Koolen,et al. These are not the Stereotypes You are Looking For: Bias and Fairness in Authorial Gender Attribution , 2017, EthNLP@EACL.
[121] Alfredo Maldonado,et al. Measuring Gender Bias in Word Embeddings across Domains and Discovering New Gender Bias Word Categories , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[122] Krithika Ramesh,et al. Evaluating Gender Bias in Hindi-English Machine Translation , 2021, GEBNLP.
[123] Yonatan Belinkov,et al. Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias , 2020, ArXiv.
[124] Siva Reddy,et al. StereoSet: Measuring stereotypical bias in pretrained language models , 2020, ACL.
[125] Hinrich Schütze,et al. Analytical Methods for Interpretable Ultradense Word Embeddings , 2019, EMNLP.
[126] Fatemeh Torabi Asr,et al. The Gender Gap Tracker: Using Natural Language Processing to measure gender bias in media , 2021, PloS one.
[127] Catherine D'Ignazio. Data Feminism: Teaching and Learning for Justice , 2021, ITiCSE.
[128] Leonardo Neves,et al. On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning , 2021, NAACL.
[129] Tolga Bolukbasi,et al. Debiasing Embeddings for Reduced Gender Bias in Text Classification , 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[130] Pascale Fung,et al. Reducing Gender Bias in Abusive Language Detection , 2018, EMNLP.
[131] Luke S. Zettlemoyer,et al. Deep Contextualized Word Representations , 2018, NAACL.
[132] Yejin Choi,et al. Event2Mind: Commonsense Inference on Events, Intents, and Reactions , 2018, ACL.
[133] Stefan Fruehauf,et al. Measuring Sex Stereotypes: A Multination Study , 2016 .
[134] Jason Baldridge,et al. Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns , 2018, TACL.
[135] Paolo Rosso,et al. Automatic Identification and Classification of Misogynistic Language on Twitter , 2018, NLDB.
[136] B. Byrne,et al. Neural Machine Translation Doesn’t Translate Gender Coreference Right Unless You Make It , 2020, GEBNLP.
[137] Yejin Choi,et al. PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction , 2020, EMNLP.
[138] Hinrich Schütze,et al. Monolingual and Multilingual Reduction of Gender Bias in Contextualized Representations , 2020, COLING.
[139] Ryan Cotterell,et al. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology , 2019, ACL.
[140] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[141] Michael S. Bernstein,et al. Shirtless and Dangerous: Quantifying Linguistic Signals of Gender Bias in an Online Fiction Writing Community , 2016, ICWSM.
[142] Timothy Baldwin,et al. Towards Robust and Privacy-preserving Text Representations , 2018, ACL.
[143] Daniel Jurafsky,et al. Word embeddings quantify 100 years of gender and ethnic stereotypes , 2017, Proceedings of the National Academy of Sciences.
[144] David Bamman,et al. Unsupervised Discovery of Biographical Structure from Text , 2014, TACL.
[145] Yulia Tsvetkov,et al. RtGender: A Corpus for Studying Differential Responses to Gender , 2018, LREC.
[146] Danushka Bollegala,et al. Dictionary-based Debiasing of Pre-trained Word Embeddings , 2021, EACL.
[147] Michael Carl,et al. Controlling Gender Equality with Shallow NLP Techniques , 2004, COLING.
[148] Jieyu Zhao,et al. Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer , 2020, ACL.
[149] Vivek Srikumar,et al. On Measuring and Mitigating Biased Inferences of Word Embeddings , 2019, AAAI.
[150] Yoav Goldberg,et al. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them , 2019, NAACL-HLT.
[151] Xiang Ren,et al. Contextualizing Hate Speech Classifiers with Post-hoc Explanation , 2020, ACL.
[152] Malvina Nissim,et al. Unmasking Contextual Stereotypes: Measuring and Mitigating BERT’s Gender Bias , 2020, GEBNLP.
[153] Siân Brooke,et al. “Condescending, Rude, Assholes”: Framing gender and hostility on Stack Overflow , 2019, Proceedings of the Third Workshop on Abusive Language Online.