Learnersourcing Quality Assessment of Explanations for Peer Instruction

This study reports on the application of text mining and machine learning methods to asynchronous peer instruction, with the goal of automatically identifying high-quality student explanations. We compare the performance of state-of-the-art methods across different reference datasets and validation schemes. We show that when the task of assessing argument quality along the dimension of convincingness is extended from curated datasets to data from a real learning environment, new challenges arise, and simpler vector space models can perform as well as a state-of-the-art neural approach.
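The vector space baseline alluded to above can be illustrated with a pairwise formulation of convincingness: represent each explanation as a TF-IDF vector and train a classifier on vector differences to predict which member of a pair is more convincing (in the spirit of SVMRank-style pairwise ranking). The following is a minimal sketch under that assumption; the toy pairs, the `more_convincing` helper, and the choice of logistic regression are illustrative, not taken from the study.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy explanation pairs: (more convincing, less convincing).
pairs = [
    ("The object accelerates because the net force is nonzero, per Newton's second law.",
     "It just moves."),
    ("Current is the same everywhere in a series circuit, so both bulbs carry one current.",
     "Because electricity."),
    ("Entropy increases because the number of accessible microstates grows.",
     "It is hot."),
]

# Fit the vector space model on all explanations.
vec = TfidfVectorizer().fit([text for pair in pairs for text in pair])

# Pairwise training set: the feature vector is the difference of the two
# TF-IDF vectors; label 1 means the first explanation is more convincing.
X, y = [], []
for better, worse in pairs:
    a, b = vec.transform([better, worse]).toarray()
    X.append(a - b); y.append(1)
    X.append(b - a); y.append(0)

clf = LogisticRegression().fit(np.array(X), y)

def more_convincing(e1: str, e2: str) -> str:
    """Return whichever of the two explanations the pairwise model prefers."""
    a, b = vec.transform([e1, e2]).toarray()
    return e1 if clf.predict([a - b])[0] == 1 else e2
```

Because the training examples come in antisymmetric pairs (`a - b` labelled 1, `b - a` labelled 0), the learned decision is order-invariant: swapping the two arguments of `more_convincing` yields the same preferred explanation.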
