What are the Biases in My Word Embedding?

This paper presents an algorithm for enumerating biases in word embeddings. The algorithm exposes a large number of offensive associations related to sensitive features such as race and gender in publicly available embeddings, including a supposedly "debiased" embedding. These biases are concerning in light of the widespread use of word embeddings. The associations are identified by geometric patterns in word embeddings that run parallel between people's names and common lower-case tokens. The algorithm is highly unsupervised: it does not even require the sensitive features to be pre-specified. This is desirable because (a) many forms of discrimination, such as racial discrimination, are linked to social constructs that may vary depending on the context, rather than to categories with fixed definitions; and (b) it makes it easier to identify biases against intersectional groups, which depend on combinations of sensitive features. The inputs to our algorithm are a list of target tokens, e.g., names, and a word embedding. It outputs a number of Word Embedding Association Tests (WEATs) that capture various biases present in the data. We illustrate the utility of our approach on publicly available word embeddings and lists of names, and evaluate its output using crowdsourcing. We also show how removing names may not remove potential proxy bias.
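To make concrete what each output test measures, the sketch below computes the standard WEAT effect size of Caliskan et al. (2017): for target token sets X and Y and attribute sets A and B, it compares each target's mean cosine similarity to A versus B, then normalizes the difference between the two groups. This is a minimal sketch, not the paper's enumeration algorithm; the toy vectors, name groups, and attribute words are hypothetical placeholders standing in for a real trained embedding.

```python
import numpy as np

# Hypothetical toy embedding: token -> vector. In practice these would come
# from a trained embedding such as word2vec or GloVe.
emb = {
    "rebecca": np.array([0.9, 0.1]), "emily":  np.array([0.8, 0.2]),
    "aisha":   np.array([0.1, 0.9]), "keisha": np.array([0.2, 0.8]),
    "pleasant":   np.array([1.0, 0.0]), "wonderful": np.array([0.9, 0.1]),
    "unpleasant": np.array([0.0, 1.0]), "terrible":  np.array([0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): mean similarity of token w to attribute set A
    minus its mean similarity to attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B):
    """Cohen's-d-style WEAT effect size for target sets X, Y
    and attribute sets A, B (Caliskan et al., 2017)."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

X = ["rebecca", "emily"]        # target names, group 1
Y = ["aisha", "keisha"]         # target names, group 2
A = ["pleasant", "wonderful"]   # attribute set 1
B = ["unpleasant", "terrible"]  # attribute set 2
print(weat_effect_size(X, Y, A, B))  # positive: X leans toward A, Y toward B
```

The enumeration algorithm described in the abstract produces many such (X, Y, A, B) tests automatically from names and a word embedding, rather than requiring an analyst to hand-pick the target and attribute sets as in this sketch.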
