Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection

We present the shared task on Fine-Grained Propaganda Detection, organized as part of the NLP4IF workshop at EMNLP-IJCNLP 2019. The task featured two subtasks. FLC is a fragment-level task that asks for the identification of propagandistic text fragments in a news article and for the prediction of the specific propaganda technique used in each such fragment (an 18-way classification task). SLC is a sentence-level binary classification task that asks systems to detect the sentences that contain propaganda. A total of 12 teams submitted systems for the FLC task, 25 teams did so for the SLC task, and 14 teams eventually submitted a system description paper. For both subtasks, most systems beat the baseline by a sizable margin. The leaderboard and the data from the competition are available at this http URL.