Multi-task Learning in Argument Mining for Persuasive Online Discussions

We utilize multi-task learning to improve argument mining in persuasive online discussions, where both micro-level and macro-level argumentation must be taken into account. Our models learn to identify argument components and the relations between them simultaneously. We also tackle the low precision that arises from imbalanced relation data by experimenting with SMOTE and XGBoost. Our approaches improve over baselines that use the same pre-trained language model but handle the argument component task and the two relation tasks separately. Furthermore, our results suggest that the tasks incorporated into multi-task learning should be chosen carefully, as using all relevant tasks does not always yield the best performance.
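To make the imbalance-handling step concrete, below is a minimal, illustrative sketch of SMOTE-style oversampling: new minority-class examples are synthesized by interpolating between a minority sample and one of its k nearest minority-class neighbors. This is a self-contained NumPy sketch for intuition only, not the paper's implementation (which uses the standard SMOTE algorithm, typically followed by a classifier such as XGBoost on the resampled data); the function name and toy data are hypothetical.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """SMOTE-style sketch: synthesize n_new minority-class samples by
    interpolating between a random minority sample and one of its
    k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self as a neighbor
    nn = np.argsort(d, axis=1)[:, :min(k, n - 1)]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)              # pick a minority sample
        j = rng.choice(nn[i])            # pick one of its neighbors
        u = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + u * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

# toy minority class: 3 points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X_new = smote_oversample(X_min, n_new=4, rng=0)
print(X_new.shape)  # (4, 2)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the minority class's local neighborhoods rather than duplicating points exactly, which is what lets a downstream classifier see a more balanced relation label distribution.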
