Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models

We investigate the impact of political ideology bias in training data. Through a set of comparison studies, we examine how these biases propagate through several widely used NLP models and how they affect overall retrieval accuracy. Our work highlights the susceptibility of large, complex models to propagating biases from human-selected input, which can degrade retrieval accuracy, and underscores the importance of controlling for these biases. Finally, to mitigate the bias, we propose learning a text representation that is invariant to political ideology while still capturing topic relevance.
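The proposed mitigation, learning an ideology-invariant representation that still predicts topic relevance, can be sketched with an adversarial setup in the style of domain-adversarial training. This is a minimal illustrative sketch, not the authors' implementation: the module names, dimensions, and the gradient-reversal formulation are assumptions. A shared encoder feeds a topic-relevance head normally, while an ideology discriminator sees the encoding through a gradient-reversal layer, so the encoder is pushed to discard ideology-predictive features.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales gradients by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows back into the encoder; no gradient for lam.
        return -ctx.lam * grad_output, None


class InvariantTopicModel(nn.Module):
    """Hypothetical sketch: shared encoder with a topic-relevance head and an
    adversarial ideology head behind gradient reversal."""

    def __init__(self, dim_in=300, dim_hid=64, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hid), nn.ReLU())
        self.topic_head = nn.Linear(dim_hid, 2)     # relevant vs. not relevant
        self.ideology_head = nn.Linear(dim_hid, 2)  # e.g. liberal vs. conservative

    def forward(self, x):
        h = self.encoder(x)
        topic_logits = self.topic_head(h)
        # Discriminator trains normally, but reversed gradients make the
        # encoder *worse* at encoding ideology.
        ideology_logits = self.ideology_head(GradReverse.apply(h, self.lam))
        return topic_logits, ideology_logits
```

Both heads would be trained jointly with standard cross-entropy losses; the reversal layer turns the minimax game into a single backward pass, which is the usual design choice that avoids alternating optimization.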
