A key task emerging in item analysis is identifying what constitutes valid and reliable measurement information, and what data support proposed score interpretations. Measurement information takes many forms with computerized tests. Technology-based items generate an enormous amount of data, tracing every click and mouse movement and time-stamping each action taken, and the recorded data fall into two general categories: process and outcomes. Outcomes are the traditional scored answers that students provide in response to prompts, but technology-based item types also capture information about the process students used to arrive at those answers. The first consideration in the practical use of such data is the nature of the data generated when learners complete complex assessment tasks. The chapter we propose discusses some possible methodological strategies for analyzing data from such technology-rich testing tasks.
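To make the process/outcome distinction concrete, the sketch below separates a time-stamped interaction log into process features and an outcome score. It is a minimal illustration only: the event format, field names, and the particular process features (action count, elapsed time, answer changes) are hypothetical, not drawn from any system described in the chapter.

```python
from dataclasses import dataclass

# Hypothetical event record for a technology-based item: each click or
# submission is logged with a timestamp (milliseconds from item onset).
@dataclass
class Event:
    timestamp_ms: int  # when the action occurred
    action: str        # e.g., "click" or "submit"
    target: str        # interface element acted upon (e.g., answer option)

def summarize_attempt(events: list[Event], key: str) -> dict:
    """Split one student's log into process data and outcome data."""
    option_clicks = [e for e in events if e.action == "click"]
    submits = [e for e in events if e.action == "submit"]
    response = submits[-1].target if submits else None

    # Process data: how the student worked through the item.
    process = {
        "n_actions": len(events),
        "total_time_ms": (events[-1].timestamp_ms - events[0].timestamp_ms
                          if events else 0),
        "n_answer_changes": max(len(option_clicks) - 1, 0),
    }
    # Outcome data: the traditional scored answer.
    outcome = {"response": response, "correct": response == key}
    return {"process": process, "outcome": outcome}

# Example: a student clicks option A, switches to B, then submits B.
log = [
    Event(0, "click", "A"),
    Event(4200, "click", "B"),
    Event(5100, "submit", "B"),
]
print(summarize_attempt(log, key="B"))
```

Even this toy split shows why the two data categories call for different analytic strategies: the outcome is a single scored response, while the process features form a multivariate record amenable to the clustering and sequence-mining approaches the chapter considers.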