Adam Tauman Kalai | Maria De-Arteaga | Lester Mackey | Myra Cheng
[1] Selma Tekir, et al. Gender Bias in Occupation Classification from the New York Times Obituaries, 2022, DEÜ Mühendislik Fakültesi Fen ve Mühendislik.
[2] Emre Kıcıman, et al. Investigations of Performance and Bias in Human-AI Teamwork in Hiring, 2022, AAAI.
[3] Margaret Mitchell, et al. Measuring Model Biases in the Absence of Ground Truth, 2021, AIES.
[4] Yejin Choi, et al. Challenges in Automated Debiasing for Toxic Language Detection, 2021, EACL.
[5] Siva Reddy, et al. StereoSet: Measuring stereotypical bias in pretrained language models, 2020, ACL.
[6] O. Keyes. You Keep Using That Word: Ways of Thinking about Gender in Computing Research, 2021.
[7] David Mimno, et al. Bad Seeds: Evaluating Lexical Methods for Bias Measurement, 2021, ACL.
[8] Hanna M. Wallach, et al. Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets, 2021, ACL.
[9] Malvina Nissim, et al. Unmasking Contextual Stereotypes: Measuring and Mitigating BERT’s Gender Bias, 2020, GEBNLP.
[10] Samuel R. Bowman, et al. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models, 2020, EMNLP.
[11] Tanmoy Chakraborty, et al. Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings, 2020, TACL.
[12] Solon Barocas, et al. Language (Technology) is Power: A Critical Survey of “Bias” in NLP, 2020, ACL.
[13] Hanna M. Wallach, et al. Fairlearn: A toolkit for assessing and improving fairness in AI, 2020.
[14] Ben Y. Zhao, et al. Detecting Gender Stereotypes: Lexicon vs. Supervised Learning Methods, 2020, CHI.
[15] Yulia Tsvetkov, et al. Unsupervised Discovery of Implicit Gender Bias, 2020, EMNLP.
[16] Luke Stark, et al. "I Don't Want Someone to Watch Me While I'm Working": Gendered Views of Facial Recognition Technology in Workplace Surveillance, 2020, J. Assoc. Inf. Sci. Technol.
[17] Emily Denton, et al. Diversity and Inclusion Metrics in Subset Selection, 2020, AIES.
[18] Lily Hu, et al. What's sex got to do with machine learning?, 2020, FAT*.
[19] Emily Denton, et al. Towards a critical race methodology in algorithmic fairness, 2019, FAT*.
[20] Yang Trista Cao, et al. Toward Gender-Inclusive Coreference Resolution, 2019, ACL.
[21] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[22] F. Calmon, et al. Predictive Multiplicity in Classification, 2019, ICML.
[23] Joel Nothman, et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, 2019, ArXiv.
[24] Solon Barocas, et al. Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices, 2019, SSRN Electronic Journal.
[25] Jed R. Brubaker, et al. How Computers See Gender, 2019, Proc. ACM Hum. Comput. Interact.
[26] Lina Dencik, et al. What does it mean to 'solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems, 2019, FAT*.
[27] Kori Inkpen Quinn, et al. What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring, 2019, HCOMP.
[28] Yunfeng Zhang, et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias, 2019, IBM Journal of Research and Development.
[29] ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019.
[30] Sahin Cem Geyik, et al. Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search, 2019, KDD.
[31] Alexandra Chouldechova, et al. What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes, 2019, NAACL.
[32] Shikha Bordia, et al. Identifying and Reducing Gender Bias in Word-Level Language Models, 2019, NAACL.
[33] Yoav Goldberg, et al. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them, 2019, NAACL-HLT.
[34] Alexandra Chouldechova, et al. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting, 2019, FAT.
[35] Adam Tauman Kalai, et al. What are the Biases in My Word Embedding?, 2018, AIES.
[36] Kush R. Varshney, et al. Bias Mitigation Post-processing for Individual and Group Fairness, 2018, ICASSP 2019.
[37] Jieyu Zhao, et al. Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations, 2018, ICCV 2019.
[38] B. Boonabaana, et al. Gender Norms, Technology Access, and Women Farmers’ Vulnerability to Climate Change in Sub-Saharan Africa, 2019, Climate Change Management.
[39] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[40] Aaron Rieke, et al. Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, 2018.
[41] Pascale Fung, et al. Reducing Gender Bias in Abusive Language Detection, 2018, EMNLP.
[42] D. Fitch, et al. Review of "Algorithms of Oppression: How Search Engines Reinforce Racism" by Noble, S. U. (2018, New York: NYU Press), 2018, CDQR.
[43] Guy N. Rothblum, et al. Multicalibration: Calibration for the (Computationally-Identifiable) Masses, 2018, ICML.
[44] Rachel Rudinger, et al. Gender Bias in Coreference Resolution, 2018, NAACL.
[45] C. Kendall, et al. For data’s sake: dilemmas in the measurement of gender minorities, 2018, Culture, Health & Sexuality.
[46] John Langford, et al. A Reductions Approach to Fair Classification, 2018, ICML.
[47] Blake Lemoine, et al. Mitigating Unwanted Biases with Adversarial Learning, 2018, AIES.
[48] Timnit Gebru, et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 2018, FAT.
[49] Tomas Mikolov, et al. Advances in Pre-Training Distributed Word Representations, 2017, LREC.
[50] Daniel Jurafsky, et al. Word embeddings quantify 100 years of gender and ethnic stereotypes, 2017, Proceedings of the National Academy of Sciences.
[51] Seth Neel, et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness, 2017, ICML.
[52] Adam Tauman Kalai, et al. Decoupled Classifiers for Group-Fair and Efficient Machine Learning, 2017, FAT.
[53] Ben Y. Zhao, et al. Gender Bias in the Job Market, 2017, Proc. ACM Hum. Comput. Interact.
[54] Michela Menegatti, et al. Gender Bias and Sexism in Language, 2017.
[55] Jon M. Kleinberg, et al. On Fairness and Calibration, 2017, NIPS.
[56] Brian Larson, et al. Gender as a Variable in Natural-Language Processing: Ethical Considerations, 2017, EthNLP@EACL.
[57] Chandler May, et al. Social Bias in Elicited Natural Language Inferences, 2017, EthNLP@EACL.
[58] Kathleen M. Carley, et al. Girls Rule, Boys Drool: Extracting Semantic and Affective Stereotypes from Twitter, 2017, CSCW.
[59] Arvind Narayanan, et al. Semantics derived automatically from language corpora contain human-like biases, 2016, Science.
[60] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[61] Tomas Mikolov, et al. Enriching Word Vectors with Subword Information, 2016, TACL.
[62] Kush R. Varshney, et al. Optimized Pre-Processing for Discrimination Prevention, 2017, NIPS.
[63] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.
[64] Adam Tauman Kalai, et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, 2016, NIPS.
[65] M. Sen, et al. Race as a Bundle of Sticks: Designs that Estimate Effects of Seemingly Immutable Characteristics, 2016.
[66] David R. Hekman, et al. If There’s Only One Woman in Your Candidate Pool, There’s Statistically No Chance She’ll Be Hired, 2016.
[67] David García, et al. It's a Man's Wikipedia? Assessing Gender Inequality in an Online Encyclopedia, 2015, ICWSM.
[68] Nathan Ensmenger, et al. “Beards, Sandals, and Other Signs of Rugged Individualism”: Masculine Culture within the Computing Professions, 2015, Osiris.
[69] Stacey L. Williams, et al. Perceptions of Female Offenders: How Stereotypes and Social Norms Affect Criminal Justice Responses, 2014.
[70] Rosamund Moon. From gorgeous to grumpy: adjectives, age, and gender, 2013.
[71] Xiangliang Zhang, et al. Decision Theory for Discrimination-Aware Classification, 2012, ICDM.
[72] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.
[73] M. Heilman. Gender stereotypes and workplace bias, 2012.
[74] Christa Tobler, et al. Trans and intersex people: discrimination on the grounds of sex, gender identity and gender expression, 2012.
[75] Toon Calders, et al. Data preprocessing techniques for classification without discrimination, 2011, Knowledge and Information Systems.
[76] Skipper Seabold, et al. Statsmodels: Econometric and Statistical Modeling with Python, 2010, SciPy.
[77] Randi C. Martin, et al. Gender and letters of recommendation for academia: agentic and communal differences, 2009, The Journal of Applied Psychology.
[78] A. Dainty, et al. How Women Engineers Do and Undo Gender: Consequences for Gender Equality, 2009.
[79] M. Leary, et al. Handbook of Individual Differences in Social Behavior, 2009.
[80] S. Shields, et al. Gender: An Intersectionality Perspective, 2008.
[81] Aaron C. Kay, et al. Exposure to benevolent sexism and complementary gender stereotypes: consequences for specific and diffuse forms of system justification, 2005, Journal of Personality and Social Psychology.
[82] Christopher B. Mayhorn, et al. Champagne, beer, or coffee? A corpus of gender-related and neutral words, 2004, Behavior Research Methods, Instruments, & Computers.
[83] Anat Rachel Shimoni, et al. Gender, genre, and writing style in formal written texts, 2003.
[84] M. Heilman. Description and prescription: How gender stereotypes prevent women's ascent up the organizational ladder, 2001.
[85] Jennifer S. Light, et al. When Computers Were Women, 1999.
[86] Y. Benjamini, et al. Controlling the false discovery rate: a practical and powerful approach to multiple testing, 1995.
[87] Patricia S. Mann. Gender Trouble: Feminism and the Subversion of Identity, 1992.
[88] K. Crenshaw. Mapping the margins: intersectionality, identity politics, and violence against women of color, 1991.
[89] J. Butler. Gender Trouble: Feminism and the Subversion of Identity, 1990.
[90] R. Shprintzen, et al. What's in a Name?, 1990, The Cleft Palate Journal.
[91] L. Doob. The psychology of social norms, 1937.