The quality of a PeerWise MCQ repository

PeerWise allows students to build a repository of multiple-choice questions that their peers can attempt and discuss online. PeerWise has been shown to foster deep learning and to improve students' performance. In this paper, we examine the nature of the repository created by a large, first-year programming class, looking in particular at the quality attributes of Coverage, Question Quality, Difficulty, and Indexing. We also investigate the effect of student ability (as measured by a class test given before PeerWise was used) on contributions to the repository. We find that the overall quality of the repository is good, with only minor deficiencies, and conclude that these defects are a small price to pay for the substantial learning benefits that result from PeerWise use.
