Identifying Content Patterns in Peer Reviews Using Graph-based Cohesion

Peer-reviewing allows students to think critically about a subject and also to learn from their classmates' work. Students can learn to write effective reviews if they receive feedback on the quality of their reviews. A review may contain summative or advisory content, or may identify problems in the author's work. Automated, content-based feedback on the helpfulness of a review can help reviewers improve their feedback. In this paper we propose a cohesion-based technique to identify patterns that are representative of a review's content type. We evaluate our pattern-based content identification approach on data from two peer-reviewing systems, Expertiza and SWoRD. Our approach achieves an accuracy of 67.07% and an F-measure of 0.67.
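The abstract does not spell out how graph-based cohesion is computed, so the sketch below is only one plausible illustration of the general idea, not the authors' method: build a token co-occurrence graph over a review's sentences and use PageRank centrality as a stand-in cohesion score to surface pattern tokens. The `networkx` dependency, the window size, and the `pattern_tokens` helper are illustrative assumptions.

```python
# Illustrative sketch only: PageRank-style centrality over a word
# co-occurrence graph as a proxy for graph-based cohesion. This is an
# assumption for illustration, not the paper's actual pattern-extraction
# algorithm.
import networkx as nx


def pattern_tokens(review_sentences, window=2, top_k=5):
    """Return the most central tokens of a review's co-occurrence graph."""
    graph = nx.Graph()
    for sentence in review_sentences:
        tokens = sentence.lower().split()
        # Link tokens that appear within `window` positions of each other.
        for i, left in enumerate(tokens):
            for right in tokens[i + 1:i + 1 + window]:
                if left != right:
                    graph.add_edge(left, right)
    if graph.number_of_nodes() == 0:
        return []
    centrality = nx.pagerank(graph)  # cohesion proxy: centrality in the graph
    return sorted(centrality, key=centrality.get, reverse=True)[:top_k]


# Example: the most central tokens hint at the review's content pattern.
print(pattern_tokens([
    "The explanation of the algorithm is unclear",
    "Please clarify the algorithm and add an example",
]))
```

In practice, one would map such cohesive patterns onto content labels (summative, advisory, problem-identifying); how that mapping is done in the paper is not reproduced here.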
