Structured Representation Learning for Online Debate Stance Prediction

Online debates provide valuable information about diverse perspectives on a wide range of issues. However, understanding the stances expressed in these debates is highly challenging, as it requires modeling both textual content and users’ conversational interactions. Current approaches rely on collective classification, which ignores the relationships between different debate topics. In this work, we propose viewing this task as a representation learning problem, embedding the text and authors jointly based on their interactions. We evaluate our model on the Internet Argument Corpus and compare different approaches for embedding structural information. Experimental results show that our model achieves significantly better results than previous competitive models.
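To make the idea of joint text–author embedding concrete, here is a minimal sketch, not the paper's actual model: authors, posts, and stance labels each get vectors in a shared space, an author–post interaction is scored by a dot product, and a stance is predicted from the combined author-plus-post vector. All dimensions, the scoring rule, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
n_authors, n_posts, n_stances = 5, 8, 2  # hypothetical sizes

# In a trained model these would be learned; here they are random placeholders.
author_emb = rng.normal(size=(n_authors, dim))   # one vector per author
post_emb = rng.normal(size=(n_posts, dim))       # e.g. averaged word vectors
stance_emb = rng.normal(size=(n_stances, dim))   # one vector per stance label

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interaction_score(a, p):
    """Agreement score between author a and post p (higher = closer in space)."""
    return sigmoid(author_emb[a] @ post_emb[p])

def predict_stance(a, p):
    """Score each stance label against the combined author + post vector."""
    joint = author_emb[a] + post_emb[p]
    return int(np.argmax(stance_emb @ joint))

score = interaction_score(0, 3)
label = predict_stance(0, 3)
```

In the full approach the embeddings would be trained so that interacting authors and their posts land near each other, letting stance prediction exploit conversational structure rather than text alone.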
