Customized Tests and Customized Norms.

With increasing interest in educational accountability, test results are now expected to meet a diverse set of informational needs. But a norm-referenced test (NRT) cannot be expected to satisfy simultaneous demands for both norm-referenced and curriculum-specific information. One possible solution, and the focus of this article, is to customize the NRT. Customized tests can take several forms: they may (a) add a few curriculum-specific items to the end of the NRT, (b) substitute locally constructed items for a few NRT items, (c) substitute a curriculum-specific test (CST) for the NRT, or (d) use equating methods to obtain predicted NRT scores from CST scores. In this article, we describe these four approaches to customized testing, address the validity of the uses and interpretations of the resulting test scores, and offer recommendations regarding the use of customized tests and the need for further research. Results indicate that customized testing can y...