The Effects of Using Different Procedures to Score Maze Measures

The purpose of this study was to examine how different scoring procedures affect the interpretation of maze curriculum-based measurements. Fall and spring data were collected from 199 students receiving supplemental reading instruction. Maze probes were first scored by counting all correct maze choices and then rescored with four variations designed to reduce the effect of random guessing. Pearson's r correlation coefficients were calculated among the scoring procedures and between maze scores and a standardized measure of reading. In addition, t tests were conducted to assess fall-to-spring growth for each scoring procedure. Results indicated that scores derived from the different procedures are highly correlated, demonstrate criterion-related validity, and show fall-to-spring growth. Educators working with struggling readers may use any of the five scoring procedures to obtain technically sound scores.
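
To illustrate the kind of analysis described above, the following is a minimal sketch of how correlations among scoring procedures and fall-to-spring paired t tests could be computed. The column names, the example data, and the specific guessing corrections shown (e.g., correct minus incorrect) are assumptions for illustration only, not the procedures or data used in the study.

```python
# Illustrative sketch only: column names, data, and the guessing
# corrections below are hypothetical, not the study's actual procedures.
import pandas as pd
from scipy import stats


def score_probe(correct, incorrect):
    """Return example maze scores for one probe under three procedures."""
    return {
        "total_correct": correct,                        # all correct choices
        "correct_minus_incorrect": correct - incorrect,  # penalize guessing fully
        "correct_minus_half_incorrect": correct - 0.5 * incorrect,  # partial penalty
    }


# Hypothetical fall and spring administrations: one row per student.
fall = pd.DataFrame({"correct": [18, 25, 12], "incorrect": [3, 1, 6]})
spring = pd.DataFrame({"correct": [24, 31, 19], "incorrect": [2, 1, 4]})

fall_scores = fall.apply(
    lambda r: pd.Series(score_probe(r.correct, r.incorrect)), axis=1
)
spring_scores = spring.apply(
    lambda r: pd.Series(score_probe(r.correct, r.incorrect)), axis=1
)

# Pearson correlations among scoring procedures (fall administration).
print(fall_scores.corr(method="pearson"))

# Paired t test of fall-to-spring growth for each scoring procedure.
for col in fall_scores.columns:
    t, p = stats.ttest_rel(spring_scores[col], fall_scores[col])
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")
```

Correlations with a criterion (a standardized reading measure) would follow the same pattern, e.g., `stats.pearsonr(fall_scores[col], criterion_scores)` for each scoring procedure, where `criterion_scores` is a hypothetical vector of standardized test scores.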
