Technical Adequacy of the easyCBM® Grade 2 Reading Measures. Technical Report #1004.

In this technical report, we provide reliability and validity evidence for the easyCBM® reading measures for grade 2 (word reading fluency, passage reading fluency, and multiple choice reading comprehension). Evidence for reliability includes internal consistency and item invariance. Evidence for validity includes concurrent, predictive, and construct validity for performance-level scores as well as for slope of improvement. Reliability of alternate forms and content validity were analyzed previously (references to prior technical reports are provided). Internal consistency, split-half reliability, and reliability of growth slopes were moderate. For concurrent and predictive validity, multiple choice reading comprehension was a better predictor of SAT-10 scores than either word or passage reading fluency. Construct validity was supported by strong model fit indices. Overall, predictive validity coefficients for all students on all measures were positive and low.

Technical Adequacy of the easyCBM® Grade 2 Reading Measures

Progress monitoring assessments are a key component of many school improvement efforts, including the Response to Intervention (RTI) approach to meeting students’ academic needs. In an RTI approach, teachers first administer a screening or benchmarking assessment to identify students who need supplemental interventions to meet grade-level expectations, then use a series of progress monitoring measures to evaluate the effectiveness of the interventions they are using with those students. When students fail to show expected levels of progress (as indicated by ‘flat line’ scores, or little improvement on repeated measures over time; a simple way of quantifying such growth is sketched below), teachers use this information to make instructional modifications, with the goal of finding an intervention or combination of instructional approaches that will enable each student to make adequate progress toward grade-level proficiency and content standards. In such a system, it is critical to have reliable measures that assess the target construct and are sensitive enough to detect improvement in skill over short periods of time.

Conceptual Framework: Curriculum-Based Measurement and Progress Monitoring

Curriculum-based measurement (CBM), long a bastion of special education, is gaining support among general education teachers seeking a way to monitor the progress their students are making toward grade-level proficiency in key skill and content areas. While reading in particular has received a great deal of attention in the CBM literature, a growing body of work is beginning to appear in the area of mathematics CBM. By definition, CBM is a formative assessment approach. By sampling skills related to the curricular content covered in a given year of instruction, yet not tied to a particular textbook, CBMs provide teachers with a snapshot of their students’ current level of proficiency in a content area as well as a mechanism for tracking the progress students make in acquiring desired academic skills throughout the year. Historically, CBMs have been very brief, individually administered measures (Deno, 2003; Good, Gruba, & Kaminski, 2002), yet they are not limited to the ‘one-minute timed probes’ with which they are often associated.
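Before turning to formal definitions, it may help to make the notion of growth concrete. In CBM practice, a student’s progress is commonly summarized as the least-squares slope of repeated scores over time; a near-zero slope is the ‘flat line’ profile described above. The following sketch uses hypothetical data and function names to illustrate the computation; it is an illustration only, not part of the easyCBM® scoring system.

    # Minimal sketch (hypothetical names): summarizing a student's
    # progress as the ordinary least-squares slope of repeated scores.

    def growth_slope(weeks, scores):
        """Least-squares slope: average score gain per week."""
        n = len(weeks)
        mean_x = sum(weeks) / n
        mean_y = sum(scores) / n
        num = sum((x - mean_x) * (y - mean_y)
                  for x, y in zip(weeks, scores))
        den = sum((x - mean_x) ** 2 for x in weeks)
        return num / den

    # A near-zero slope is the 'flat line' profile described above.
    weeks = [1, 2, 3, 4, 5, 6]
    scores = [22, 23, 22, 24, 23, 23]   # hypothetical weekly scores
    print(round(growth_slope(weeks, scores), 2))  # 0.2 points per week

A slope this close to zero would signal that the current intervention may need to be modified, whereas a clearly positive slope would indicate that the student is responding to instruction.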
In one of the early definitions of curriculum-based measurement (CBM), Deno (1987) stated that “the term curriculum-based assessment generally refers to any approach that uses direct observation and recording of a student’s performance in the local school curriculum as a basis for gathering information to make instructional decisions... The term curriculum-based measurement refers to a specific set of procedures created through a research and development program ... and grew out of the Data-Based Program Modification system developed by Deno and Mirkin (1977)” (p. 41). He noted that CBM is distinct from many teacher-made classroom assessments in two important respects: (a) the procedures reflect technically adequate measures (“they possess reliability and validity to a degree that equals or exceeds that of most achievement tests,” p. 41), and (b) “growth is described by an increasing score on a standard, or constant task. The most common application of CBM requires that a student’s performance in each curriculum area be measured on a single global task repeatedly across time” (p. 41).

In the three decades since Deno and his colleagues introduced CBM, progress monitoring probes, as they have come to be called, have increased in popularity, and they are now a regular part of many schools’ educational programs (Alonzo, Tindal, & Ketterlin-Geller, 2006). However, CBMs – even those widely used across the United States – often lack the psychometric properties expected of modern, technically adequate assessments. Although the precision of instrument development has advanced tremendously in the past 30 years with the advent of more sophisticated statistical techniques for analyzing tests on an item-by-item basis, rather than relying exclusively on comparisons of means and standard deviations to evaluate the comparability of alternate forms, the world of CBMs has not always kept pace with these statistical advances.

A key feature of assessments designed for progress monitoring is that alternate forms must be as equivalent as possible to allow meaningful interpretation of student performance data across time. Without such cross-form equivalence, changes in scores from one testing session to the next are difficult to attribute to changes in student skill or knowledge; improvements in student scores may, in fact, be an artifact of the second form of the assessment being easier than the form administered first. The advent of more sophisticated data analysis techniques (such as the Rasch modeling used in the development of the easyCBM® progress monitoring and benchmarking assessments; the model is shown below) has made it possible to increase the precision with which we develop and evaluate the quality of assessment tools.

In this technical report, we provide the results of a series of studies evaluating the technical adequacy of the easyCBM® progress monitoring assessments in reading designed for use with students in grade 2. This assessment system was developed for educators interested in monitoring the progress their students make in the constructs of oral reading fluency and reading comprehension. Additional technical reports present the results of similar studies of the easyCBM® assessments in mathematics (Anderson et al., 2010; Nese et al., 2010) and in reading, with a focus on the kindergarten and first grade measures (Lai et al., 2010) and the grade three through grade eight measures (Sáez et al., 2010).
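For reference, the Rasch modeling mentioned above is based on the standard dichotomous Rasch model (the notation below is the conventional one, not taken from this report), which expresses the probability that student $j$ answers item $i$ correctly in terms of the student’s ability $\theta_j$ and the item’s difficulty $b_i$:

$$P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}$$

Because all item difficulties are estimated on a common logit scale, items can be assembled into alternate forms of matched difficulty, and forms can be compared item by item rather than only through means and standard deviations – precisely the cross-form equivalence requirement discussed above.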
The easyCBM® Progress Monitoring Assessments

The online easyCBM® progress monitoring assessment system was launched in September 2006 as part of a Model Demonstration Center on Progress Monitoring funded by the Office of Special Education Programs (OSEP). At the time this technical report was published, 111,977 teachers had easyCBM® accounts, representing schools and districts spread across every state in the country. During the 2008-2009 school year, the system averaged 305 new accounts registered each week, and the popularity of the system continues to grow; in October 2010 alone, 11,885 new teachers registered for accounts. The online assessment system provides both universal screener assessments for fall, winter, and spring administration and multiple alternate forms of a variety of progress monitoring measures designed for use in K-8 school settings. As states fund Response to Intervention (RTI) initiatives, schools need technically adequate measures for monitoring progress. Given the increasing popularity of the easyCBM® online assessment system, it is imperative that a thorough analysis of the measures’ technical adequacy be conducted and the results shared with the research and practitioner communities. This technical report addresses that need directly, providing the results of a series of studies examining the technical adequacy of the 2009/2010 version of the easyCBM® assessments in reading.

Methods

In this section, we describe the setting and subjects, measures, and data analysis procedures.

Setting and Subjects

The data were gathered during the 2009-2010 school year from 71 schools in three districts in the Pacific Northwest. All students in attendance at the schools during the assessment period participated in the testing. The grade 2 word reading fluency (WRF) sample ranged from 2,154 to 2,207 students (fall to spring), the passage reading fluency (PRF) sample ranged from 2,205 to 2,236 students, and the multiple choice reading comprehension (MCRC) sample ranged from 2,144 to 2,301 students; 205 students took the SAT-10. Approximately 49% of the sample was female. No other demographic data were available for the grade 2 sample.

Measures

Assessment data used in this study included scores from the fall, winter, and spring administrations of the easyCBM® reading measures for grade 2 and scores from the SAT-10.

easyCBM® word reading fluency (WRF). Students are shown a sheet of paper with a variety of decodable words and sight words arranged in a table. They are instructed to read the words aloud, moving left to right and then down the rows. Words read correctly and self-corrections are counted as correct; errors and skipped words are counted as incorrect. The student receives one point for every correct response and has 60 seconds to complete the measure.

easyCBM® passage reading fluency (PRF). On the passage reading fluency measure, students are given 60 seconds to read aloud a short (approximately 250-word) narrative passage presented on a single side of a sheet of paper. Assessors follow along on their own copy of the test protocol, marking as errors any words skipped or read incorrectly. If a student pauses for more than three seconds on a word, the assessor supplies the word and marks it as incorrect.
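To make the scoring rules above concrete, the following sketch (hypothetical function and mark names, not part of easyCBM®) computes a fluency score from an assessor’s marks under the stated rules: correct readings and self-corrections earn one point each, while errors and skipped words earn none.

    # Hypothetical sketch of the fluency scoring rules described above.
    # Mark names are illustrative: "correct" and "self_correct" count as
    # correct; "error" and "skip" count as incorrect.

    def score_fluency(marks):
        """One point per word read correctly in the 60-second window."""
        return sum(1 for m in marks if m in {"correct", "self_correct"})

    # Example: seven words attempted before time expired -> score of 5.
    marks = ["correct", "error", "correct", "self_correct",
             "skip", "correct", "correct"]
    print(score_fluency(marks))  # 5

The same rule applies to both WRF and PRF: the score is simply the count of words read correctly before time expires, which is what makes the measures quick to administer and straightforward to chart over repeated administrations.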