Towards a Trace-based Evaluation Model for Knowledge Acquisition and Training Resource Adaption

e-Assessment in an e-learning system is aimed at evaluating learners regarding their knowledge acquisition. Available assessment methods are usually applied at the end of a training activity to state whether a given learner has passed or failed a training unit or level, based on the grading results obtained. Most grading processes follow the SCORM standard (Scorm, 2006) and use duration and number of attempts to compute scores. This information is valuable for grading, but it can also be exploited to capture learner behaviour during a training activity, and thus to assess both the learner's knowledge acquisition and the adequacy of the training resources. Therefore, in this paper we consider duration and number of attempts as modeled traces, upon which we build a theoretical model for the automated evaluation of learners' knowledge acquisition as a training activity progresses. The values obtained can be used to adapt training strategies and resources, improving both the learner's knowledge level and the quality of the e-learning platform.
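To make the idea concrete, the following is a minimal sketch of how duration and number of attempts could be modeled as traces and combined into a per-exercise acquisition indicator. The `Trace` structure, the weighting, and the thresholds (`expected_duration_s`, `max_attempts`) are illustrative assumptions, not the paper's actual model; SCORM itself only standardizes how such data is recorded, not how it is scored.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """One modeled trace: a learner's interaction with one exercise."""
    exercise_id: str
    duration_s: float  # time spent on the exercise (SCORM session time)
    attempts: int      # number of attempts before success

def acquisition_score(trace: Trace,
                      expected_duration_s: float = 120.0,
                      max_attempts: int = 5) -> float:
    """Hypothetical indicator in [0, 1]: completing faster than the
    expected duration with fewer attempts suggests stronger acquisition."""
    time_factor = min(1.0, expected_duration_s / max(trace.duration_s, 1.0))
    attempt_factor = max(0.0, 1.0 - (trace.attempts - 1) / max_attempts)
    return 0.5 * time_factor + 0.5 * attempt_factor

def progression(traces: list[Trace]) -> list[float]:
    """Score traces in chronological order to observe how knowledge
    acquisition evolves as the training activity progresses."""
    return [acquisition_score(t) for t in traces]
```

A downward trend in `progression` over successive exercises could then trigger an adaptation of the training strategy or flag a resource as inadequate.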
