Washington Assessment of Student Learning

The Washington Assessment of Student Learning (WASL) was the standardized educational assessment administered by the state of Washington from 1997 to 2009. It tested students in Washington's public schools in reading, writing, mathematics, and science. Students in third through eighth grade took the reading and mathematics sections. In addition, fifth and eighth graders were required to take the science section, and fourth and seventh graders also took the writing section. Tenth graders were tested in all four sections. For the Classes of 2008 and 2009, the WASL was a graduation requirement. Private-school and home-schooled students were exempt from the WASL.

A student's performance on the reading, math, and science sections was reported using "scale scores": three-digit numbers that placed the student into one of four levels: Advanced (Level 4), Proficient (Level 3), Basic (Level 2), and Below Basic (Level 1). A scale score of 400 was assigned to a student who had just barely met the state standard; this score sits at the lower end of Level 3. Students scoring in Level 4 were said to have exceeded the state standard, while students with scores in Level 1 or Level 2 had not met the standard. Students generally had to earn roughly 60 to 65 percent of the points possible on each test to pass; that score or above meant they had met the required standard for proficiency in that particular subject.
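The scale-score-to-level mapping described above can be sketched as a simple classifier. Note that only the Level 3 cutoff (400, the "just barely met standard" score) is documented here; the other two cutoffs in this sketch are invented placeholders for illustration, not official WASL values.

```python
# Hypothetical sketch of mapping a WASL scale score to a performance level.
# Only LEVEL_3_CUTOFF (400) is documented in the text above; the other
# cutoffs are placeholder values chosen purely for illustration.

LEVEL_3_CUTOFF = 400  # documented: just barely meets the state standard
LEVEL_2_CUTOFF = 375  # placeholder, not an official value
LEVEL_4_CUTOFF = 425  # placeholder, not an official value

def performance_level(scale_score: int) -> int:
    """Return the performance level (1-4) for a given scale score."""
    if scale_score >= LEVEL_4_CUTOFF:
        return 4  # Advanced: exceeded the state standard
    if scale_score >= LEVEL_3_CUTOFF:
        return 3  # Proficient: met the state standard
    if scale_score >= LEVEL_2_CUTOFF:
        return 2  # Basic: did not meet standard
    return 1      # Below Basic: did not meet standard

def met_standard(scale_score: int) -> bool:
    """A student meets standard at or above the Level 3 cutoff."""
    return scale_score >= LEVEL_3_CUTOFF
```

For example, a score of 400 lands at the bottom of Level 3 and counts as meeting standard, while 399 does not.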
