Relating Product Data to Process Data from Computer-Based Competency Assessment

Competency measurement typically focuses on task outcomes. Taking process data into account (e.g., processing times and interaction steps) can provide new insights into construct-related solution behavior or confirm assumptions that govern task design. This chapter summarizes four studies to illustrate the potential of behavioral process data for explaining task success. It also shows that generic process measures such as time on task may relate differently to task success, depending on features of the task and the test-taker. The first study addresses differential effects of time on task on success across tasks used in the OECD Programme for the International Assessment of Adult Competencies (PIAAC). The second study, also based on PIAAC data, investigates, at a fine-grained level, how the time spent on automatable subtasks in problem-solving tasks relates to task success. The third study addresses how the number of steps taken during problem solving predicts success in PIAAC problem-solving tasks. In a fourth study, we explore whether successful test-takers can be clustered on the basis of various behavioral process indicators that reflect information problem solving. Finally, we address how to handle large, unstructured sets of process data and briefly present a process data extraction tool.
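To make the general idea concrete, the sketch below shows one way to relate a generic process indicator (time on task) to task success while letting its effect vary with a task feature such as difficulty. This is only an illustrative sketch under assumed data, not the analysis used in the studies summarized here (which rely on generalized linear mixed models with crossed random effects for persons and items); the column names, file name, and the plain logistic regression are placeholders for illustration.

```python
# Illustrative sketch only: relate a process indicator (time on task) to a
# product indicator (task success), allowing the time-on-task effect to differ
# by a task feature (difficulty) via an interaction term.
# Column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Expected layout: one row per person-task interaction with
#   success          0/1 task outcome (product data)
#   time_on_task     seconds spent on the task (process data)
#   task_difficulty  a task feature, e.g., an item difficulty estimate
df = pd.read_csv("process_and_product_data.csv")

# The interaction lets the relation between time on task and success vary
# with task difficulty, mirroring the chapter's point that generic process
# measures do not relate to success uniformly across tasks.
model = smf.logit("success ~ time_on_task * task_difficulty", data=df).fit()
print(model.summary())
```

A positive interaction coefficient in such a sketch would indicate that longer processing times are more beneficial on harder tasks; the studies themselves additionally account for dependencies among responses from the same person and the same task.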
