Supporting Mediated Peer-Evaluation to Grade Answers to Open-Ended Questions

We present an approach to the semi-automatic grading of students' answers to open-ended questions (open answers). The approach combines peer evaluation with teacher evaluation. Each learner is modeled by her Knowledge and by the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, are represented in a Bayesian Network, in which the grades of the answers and the elements of the learner models are variables with values in a probability distribution. The initial state of the network is determined by the peer-assessment data; then, each grade the teacher assigns to an answer triggers evidence propagation through the network. The framework is implemented in a web-based system. We also present an experimental activity designed to verify the effectiveness of the approach, in terms of the correctness of the system's grading, the amount of teacher work required, and the correlation of the system's outputs with the teacher's grades and the students' final exam grades.
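To make the mechanics concrete, the following is a minimal sketch of how grade variables, learner-model variables, and evidence propagation could be wired together in a discrete Bayesian network. It assumes the pgmpy Python library; the network structure, the variable names (K_A, G_A, J_B, P_BA), and all the probabilities are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of the evidence-propagation idea, assuming the pgmpy library.
# Variable names and all CPD values are hypothetical, not taken from the paper.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# One answer by student A, peer-graded by student B:
#   K_A  = A's Knowledge            (0 = low, 1 = high)
#   G_A  = true grade of A's answer (0 = fail, 1 = pass)
#   J_B  = B's Judgment quality     (0 = low, 1 = high)
#   P_BA = peer grade B gives to A's answer (0 = fail, 1 = pass)
model = BayesianNetwork([("K_A", "G_A"), ("G_A", "P_BA"), ("J_B", "P_BA")])

cpd_k = TabularCPD("K_A", 2, [[0.5], [0.5]])  # uninformative prior
cpd_j = TabularCPD("J_B", 2, [[0.5], [0.5]])  # uninformative prior

# A knowledgeable student is more likely to produce a passing answer.
cpd_g = TabularCPD("G_A", 2,
                   [[0.8, 0.2],   # P(G_A = fail | K_A = low, high)
                    [0.2, 0.8]],  # P(G_A = pass | K_A = low, high)
                   evidence=["K_A"], evidence_card=[2])

# The peer grade agrees with the true grade more often when B's Judgment
# is high (0.9 agreement) than when it is low (0.6 agreement).
cpd_p = TabularCPD("P_BA", 2,
                   [[0.6, 0.9, 0.4, 0.1],   # P(P_BA = fail | G_A, J_B)
                    [0.4, 0.1, 0.6, 0.9]],  # P(P_BA = pass | G_A, J_B)
                   evidence=["G_A", "J_B"], evidence_card=[2, 2])

model.add_cpds(cpd_k, cpd_j, cpd_g, cpd_p)
assert model.check_model()

infer = VariableElimination(model)

# Initial state: only peer-assessment data is observed
# (B graded A's answer "pass").
print(infer.query(["G_A"], evidence={"P_BA": 1}))

# The teacher then grades A's answer "pass": this new evidence propagates
# back through the network, updating beliefs about K_A and J_B.
print(infer.query(["K_A", "J_B"], evidence={"G_A": 1, "P_BA": 1}))
```

In the full framework described by the abstract, one such grade variable per answer and the Knowledge and Judgment variables of every student would be connected in the same fashion, so that each grade the teacher enters as evidence also refines the estimates for the still-ungraded answers.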
