Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds