A Hybrid Engineering Process for Semi-automatic Item Generation

Test authors can generate test items (semi-)automatically using different approaches. On the one hand, bottom-up approaches generate items from sources such as texts or domain models; however, relating the generated items to competence models, which define the required knowledge and skills on a proficiency scale, remains a challenge. On the other hand, top-down approaches use cognitive models and competence constructs to specify the knowledge and skills to be assessed; unfortunately, at this high level of abstraction it is impossible to identify which item elements can actually be generated automatically. In this paper we present a hybrid process that integrates both approaches. It aims at ensuring traceability between the specification levels and at making it possible to influence item generation at runtime, i.e., after all the intermediate models have been designed. In the context of the European project EAGLE, we use this process to generate items for information literacy with a focus on text comprehension.
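To make the notion of semi-automatic item generation more concrete, the following is a minimal sketch of template-based generation in the spirit of the bottom-up approaches mentioned above: an item model (a stem with variable slots) is instantiated from a toy domain model that supplies the correct answer and distractors. All names and data here are illustrative assumptions, not the EAGLE process or any system described in the paper.

```python
# Minimal, hypothetical sketch of template-based item generation.
# An "item model" is a stem template with variable slots; a toy
# "domain model" supplies the key (correct answer) and distractors.

import itertools
import random

# Hypothetical item model: stem template plus the values each slot may take.
ITEM_MODEL = {
    "stem": "According to the text, what is the capital of {country}?",
    "variables": {"country": ["France", "Spain", "Italy"]},
}

# Hypothetical domain model: maps each slot value to a key and distractors.
DOMAIN_MODEL = {
    "France": {"key": "Paris", "distractors": ["Lyon", "Marseille", "Nice"]},
    "Spain": {"key": "Madrid", "distractors": ["Barcelona", "Seville", "Valencia"]},
    "Italy": {"key": "Rome", "distractors": ["Milan", "Naples", "Turin"]},
}

def generate_items(item_model, domain_model, n_distractors=3, seed=0):
    """Instantiate the item model once per combination of slot values."""
    rng = random.Random(seed)
    names = list(item_model["variables"])
    for values in itertools.product(*(item_model["variables"][n] for n in names)):
        binding = dict(zip(names, values))
        # For this single-slot sketch, the first slot value selects the entry.
        entry = domain_model[binding[names[0]]]
        options = [entry["key"]] + rng.sample(entry["distractors"], n_distractors)
        rng.shuffle(options)
        yield {
            "stem": item_model["stem"].format(**binding),
            "options": options,
            "key": entry["key"],
        }

if __name__ == "__main__":
    for item in generate_items(ITEM_MODEL, DOMAIN_MODEL):
        print(item["stem"], item["options"], "->", item["key"])
```

In a hybrid process as described in the paper, such an item model would additionally be traced back to the competence construct it operationalizes, so that test authors can still adjust generation (e.g., restrict slot values or distractor pools) at runtime.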
