A Diagnostic Tree Model for Adaptive Assessment of Complex Cognitive Processes Using Multidimensional Response Options

A tree model for diagnostic educational testing is described along with Monte Carlo simulations designed to evaluate measurement accuracy based on the model. The model is implemented in an assessment of inferential reading comprehension, the Multiple-Choice Online Causal Comprehension Assessment (MOCCA), through a sequential, multidimensional, computerized adaptive testing (CAT) strategy. Assessment of the first dimension, reading comprehension (RC), is based on the three-parameter logistic model. For diagnostic and intervention purposes, the second dimension, called process propensity (PP), is used to classify struggling students based on their pattern of incorrect responses. In the simulation studies, CAT item selection rules and stopping rules were varied to evaluate their effect on measurement accuracy along dimension RC and classification accuracy along dimension PP. For dimension RC, methods that improved accuracy tended to increase test length. For dimension PP, however, item selection and stopping rules increased classification accuracy without materially increasing test length. A small live-testing pilot study confirmed some of the findings of the simulation studies. Development of the assessment has been guided by psychometric theory, Monte Carlo simulation results, and a theory of instruction and diagnosis.
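The abstract's first dimension (RC) is scored with the three-parameter logistic (3PL) model inside a CAT loop. As a generic illustration of how 3PL-based adaptive item selection typically works (not the paper's actual MOCCA implementation; all function names and the maximum-information selection rule here are illustrative assumptions), the item response probability and Fisher information can be sketched as:

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response: c + (1 - c) / (1 + exp(-a(theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    return (a ** 2) * ((1.0 - p) / p) * (((p - c) / (1.0 - c)) ** 2)

def select_item(theta_hat, item_bank, administered):
    """Maximum-information selection: pick the unadministered item
    whose Fisher information is largest at the current ability estimate."""
    best_idx, best_info = None, -1.0
    for idx, (a, b, c) in enumerate(item_bank):
        if idx in administered:
            continue
        i = info_3pl(theta_hat, a, b, c)
        if i > best_info:
            best_idx, best_info = idx, i
    return best_idx

# Example: with theta_hat = 0, a bank of three items, and nothing yet
# administered, the most informative item is the highly discriminating
# one located at b = 0.
bank = [(1.0, -1.0, 0.2), (1.5, 0.0, 0.2), (1.0, 1.0, 0.2)]
chosen = select_item(0.0, bank, set())  # → 1
```

Varying the selection rule (e.g., maximum information versus randomized selection) and the stopping rule (fixed length versus a standard-error threshold) is exactly the kind of manipulation the simulation studies evaluate for their effect on measurement accuracy and test length.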
