The Past, Present, and Future of Curriculum-Based Measurement Research

Thirty years ago, the dominant approach to progress monitoring was mastery measurement. With mastery measurement, teachers specify a hierarchy of instructional objectives constituting the annual curriculum and, for each objective in the sequence, devise a criterion-referenced test to assess mastery. When a student achieves the mastery criterion for an objective, the teacher simultaneously shifts instruction and assessment to the next skill in the hierarchy. In this way, learning is conceptualized as a series of short-term accomplishments, which are believed to accumulate into broad competence. This notion of progress monitoring was represented in popular methods such as the Wisconsin Instructional Design System (see www.wids.org) and Precision Teaching (e.g., www.celeration.org).

At about that same time, Stan Deno at the University of Minnesota, with a handful of doctoral students (including Doug Marston, Steve Robinson, Mark Shinn, Jerry Tindal, Caren Wesson, and me), launched a systematic program of research on the technical features, logistical challenges, and instructional effectiveness of progress monitoring. The initial focus of that research program was mastery measurement, but several technical difficulties associated with mastery measurement quickly emerged. For example, to assess mastery of a specific skill, each mastery measurement criterion-referenced test addresses a single skill. Such testing is potentially misleading, however, because many low achievers can read consonant-vowel-consonant words if they know that all words on the page conform to the pattern; similarly, they can solve addition-with-regrouping problems if they know that all problems on the page fit that problem type. By contrast, when a test mixes words with different phonetic patterns or mixes math problems of different types (as occurs on high-stakes tests and in the real world), these same students no longer perform the "mastered" skill competently. This calls into question mastery measurement's assumption that a series of short-term accomplishments accumulates into broad-based competence; it compromises the relation between the number of objectives mastered during the year and end-of-year performance on more global assessments; and it can lull educators into a false sense that their students are making progress.

The CBM Alternative

To address this and other important problems associated with mastery measurement (for a full discussion, see Fuchs & Deno, 1991), Deno (1985) conceptualized an alternative approach to progress monitoring: curriculum-based measurement (CBM). Each weekly CBM test is an alternate form representing the performance desired at the end of the year. In this way, CBM circumvents mastery measurement's technical difficulties by requiring students to integrate, on every weekly test, the various skills required for competent year-end performance. As students learn the necessary components of the annual curriculum, their CBM scores gradually increase. Also, because each weekly test is comparable in difficulty and conceptualization, the slope of a student's scores over time can be used to quantify rate of learning. Slope can also be used to gauge a student's responsiveness to the instructional program and as a signal to revise the student's program when inadequate responsiveness is revealed.
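To make the slope idea concrete, here is a minimal computational sketch, not a prescribed CBM procedure: it fits an ordinary least-squares line to a series of weekly scores and reports the slope as an estimated rate of learning. The function name, the data, and the eight-week window are all hypothetical.

```python
# Minimal sketch: estimating a student's rate of learning as the
# ordinary least-squares slope of weekly CBM scores. The scores below
# are hypothetical; in practice each score comes from one alternate-form
# CBM test (e.g., words read correctly per minute).

def cbm_slope(scores):
    """Return the least-squares slope in score units per week."""
    n = len(scores)
    weeks = range(1, n + 1)
    mean_week = sum(weeks) / n
    mean_score = sum(scores) / n
    num = sum((w - mean_week) * (s - mean_score)
              for w, s in zip(weeks, scores))
    den = sum((w - mean_week) ** 2 for w in weeks)
    return num / den

weekly_scores = [42, 45, 44, 48, 51, 50, 54, 57]  # hypothetical data
print(f"Estimated growth: {cbm_slope(weekly_scores):.2f} score units per week")
# A slope well below the growth expected for the student's grade would
# signal the teacher to revise the instructional program.
```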
A key challenge in the development of CBM is to identify measurement tasks that simultaneously integrate the various skills required for competent year-end performance. Two approaches have been used.

One involves identifying a task that correlates robustly (and more strongly than competing candidate tasks) with the various component skills constituting the academic domain. For example, Deno, Mirkin, and Chiang (1982) first identified passage reading fluency (often termed "oral reading fluency") as a key CBM task by showing that its correlations with valued criterion measures exceeded those of other potential CBM tasks. Conceptually, it makes sense that passage reading fluency is a robust indicator of overall reading competence. …
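As a rough illustration of this validation logic, and not a reproduction of Deno, Mirkin, and Chiang's (1982) analysis, the sketch below compares Pearson correlations between one criterion measure and two candidate tasks. The task names and all scores are invented for illustration.

```python
# Minimal sketch of CBM task validation: compare how strongly each
# candidate task correlates with a valued criterion measure.
# All data are hypothetical.

from statistics import correlation  # Pearson's r; Python 3.10+

criterion = [31, 45, 52, 60, 38, 71, 55, 66]         # e.g., standardized reading test
passage_fluency = [48, 70, 82, 95, 58, 110, 85, 99]  # words read correctly per minute
word_list = [20, 34, 30, 41, 27, 45, 36, 40]         # competing candidate task

for name, task in [("passage reading fluency", passage_fluency),
                   ("isolated word list", word_list)]:
    print(f"{name}: r = {correlation(criterion, task):.2f}")
# The task whose correlations with criterion measures are consistently
# highest is the stronger candidate for a global CBM indicator.
```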

[1] Lynn S. Fuchs, et al. Paradigmatic Distinctions between Instructionally Relevant Measurement Models, 1991.

[2] S. Deno, et al. Curriculum-Based Measurement: The Emerging Alternative, 1985, Exceptional Children.

[3] Michael D. Hixson, et al. Using Curriculum-Based Measurement to Predict Performance on State Assessments in Reading, 2004.

[4] Lynn S. Fuchs, et al. Effects of Curriculum-Based Measurement and Consultation on Teacher Planning and Student Achievement in Mathematics Operations, 1991.

[5] Theodore J. Christ, et al. An Examination of Variability as a Function of Passage Variance in CBM Progress Monitoring, 2004.

[6] Lynn S. Fuchs, et al. The Contribution of Skills Analysis to Curriculum-Based Measurement in Spelling, 1991.

[7] L. Fuchs, et al. Enhancing Students' Helping Behavior during Peer-Mediated Instruction with Conceptual Mathematical Explanations, 1997, The Elementary School Journal.

[8] Lynn S. Fuchs, et al. Formative Evaluation of Academic Progress: How Much Growth Can We Expect?, 1993.

[9] Lynn S. Fuchs, et al. The Effects of Frequent Curriculum-Based Measurement and Evaluation on Pedagogy, Student Achievement, and Student Awareness of Learning, 1984.

[10] Lynn S. Fuchs, et al. Using CBM as an Indicator of Decoding, Word Reading, and Comprehension: Do the Relations Change With Grade?, 2005.

[11] T. L. Eckert, et al. Has Curriculum-Based Assessment Become a Staple of School Psychology Practice? An Update and Extension of Knowledge, Use, and Attitudes From 1990 to 2000, 2004.

[12] Lynn S. Fuchs, et al. The Role of Skills Analysis in Curriculum-Based Measurement in Math, 1990.

[13] Lynn S. Fuchs, et al. Strengthening the Connection between Assessment and Instructional Planning with Expert Systems, 1994.

[14] Mark R. Shinn, et al. A Preliminary Investigation Into the Identification and Development of Early Mathematics Curriculum-Based Measurement, 2004.

[15] Michelle K. Hosp, et al. Oral Reading Fluency as an Indicator of Reading Competence: A Theoretical, Empirical, and Historical Analysis, 2001.

[16] Shannon M. Suldo, et al. Examining the Incremental Benefits of Administering a Maze and Three Versus One Curriculum-Based Measurement Reading Probes When Conducting Universal Screening, 2004.