AN INVESTIGATION OF BIASES TOWARD QUEER USERS IN AI AND NATURAL LANGUAGE PROCESSING
Natural Language Processing (NLP) has gained traction for its broad applications and its importance in AI-driven decision making. Research has revealed that Google’s NLP API holds biases toward certain words; for example, it classifies “homosexual” as carrying a negative sentiment. This investigation applies NLP strategies to the queer virtual community, colloquially known as GayTwitter, to further investigate such biases. Tweets from GayTwitter users were compiled into a dataset used to build, train, and test a sentiment analyzer. The analyzer employs Word2Vec, an NLP/AI technique, in conjunction with t-SNE to produce and visualize word embeddings of the tweets in the corpus. Even at this stage, building a purpose-specific dataset with the aim of reducing the bias found in NLP models showed a promising trajectory toward a method for mitigating biased AI technology.
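As a minimal sketch of the kind of pipeline described above (not the authors’ code), the following Python example trains Word2Vec embeddings on a small, illustrative tweet corpus and projects them to two dimensions with t-SNE for inspection. The example tweets and all parameter values are assumptions for demonstration only; the real corpus would be drawn from GayTwitter.

```python
# Sketch: Word2Vec embeddings of a tweet corpus, visualized via t-SNE.
# Corpus and hyperparameters here are illustrative, not from the study.
from gensim.models import Word2Vec          # gensim >= 4.0
from sklearn.manifold import TSNE
import numpy as np

# Hypothetical pre-tokenized tweets standing in for the GayTwitter corpus.
tweets = [
    ["queer", "community", "support", "love"],
    ["pride", "month", "celebration", "joy"],
    ["allies", "stand", "with", "queer", "friends"],
]

# Train a small Word2Vec model on the tokenized tweets.
model = Word2Vec(sentences=tweets, vector_size=50, window=3,
                 min_count=1, workers=1, seed=42)

# Collect the learned embedding for every vocabulary word.
words = list(model.wv.index_to_key)
vectors = np.array([model.wv[w] for w in words])

# Reduce the embeddings to 2-D with t-SNE so they can be plotted and inspected.
# Perplexity must be smaller than the number of points.
tsne = TSNE(n_components=2, perplexity=min(5, len(words) - 1), random_state=42)
coords = tsne.fit_transform(vectors)

for word, (x, y) in zip(words, coords):
    print(f"{word:>12s}  ({x:6.2f}, {y:6.2f})")
```

In practice, the 2-D coordinates would be plotted so that clusters of sentiment-laden words around identity terms can be examined for bias.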