Adaptive Item Calibration: A Process for Estimating Item Parameters Within a Computerized Adaptive Test

The characteristics of an adaptive test change the characteristics of the field testing that is necessary to add items to an existing measurement scale. The process used to add field-test items to an adaptive test might lead to scale drift or disrupt the test by administering items of inappropriate difficulty. The current study makes use of the symmetry of examinee and item roles in item response theory to describe a process for adaptive item calibration. In this process, a field-test item is successively administered to examinees whose ability levels match the item's momentary difficulty estimate. By treating the item as if it were taking an adaptive test, examinees can be selected who provide the most information about the item at its current difficulty estimate, which should yield a more efficient procedure for estimating item parameters. The process is described within the context of the one-parameter logistic (Rasch) IRT model and is then simulated to determine whether it can be more accurate and efficient than random presentation of field-test items to examinees. Results indicated that adaptive item calibration might provide a viable approach to item calibration within an adaptive test. It might be most useful for expanding item pools in settings with small examinee samples or a need for large numbers of new items.
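
Below is a minimal sketch of the idea under the one-parameter logistic (Rasch) model, assuming a simulated examinee pool with known abilities, nearest-ability examinee selection (where Rasch item information peaks), and Newton-Raphson maximum-likelihood updating of the difficulty after each response. The function names, pool size, and update scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct response under the 1PL (Rasch) model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mle_difficulty(thetas, responses, b0=0.0, iters=25, tol=1e-6):
    """Newton-Raphson MLE of item difficulty, treating abilities as known."""
    thetas = np.asarray(thetas, dtype=float)
    x = np.asarray(responses, dtype=float)
    b = b0
    for _ in range(iters):
        p = rasch_p(thetas, b)
        info = np.sum(p * (1.0 - p))          # item information at the current estimate
        if info < 1e-10:
            break
        step = np.sum(p - x) / info           # Newton step for the Rasch likelihood
        b += step
        if abs(step) < tol:
            break
    return b

def adaptively_calibrate(true_b, pool_thetas, n_admin=30, rng=None):
    """Treat a field-test item like an adaptive-test examinee: repeatedly give it
    to the unused examinee whose ability is closest to the item's momentary
    difficulty estimate, then re-estimate the difficulty from all responses."""
    rng = np.random.default_rng() if rng is None else rng
    available = list(range(len(pool_thetas)))
    b_hat = 0.0                                # provisional difficulty estimate
    used, responses = [], []
    for _ in range(n_admin):
        # Rasch item information peaks where theta equals b, so the most
        # informative examinee is the one nearest the current estimate.
        i = min(available, key=lambda k: abs(pool_thetas[k] - b_hat))
        available.remove(i)
        theta = pool_thetas[i]
        responses.append(int(rng.random() < rasch_p(theta, true_b)))  # simulated answer
        used.append(theta)
        if 0 < sum(responses) < len(responses):   # MLE exists only for mixed responses
            b_hat = mle_difficulty(used, responses, b0=b_hat)
    return b_hat

rng = np.random.default_rng(7)
pool = rng.normal(0.0, 1.0, size=2000)            # simulated examinee abilities
print(adaptively_calibrate(true_b=0.8, pool_thetas=pool, rng=rng))
```

Because all-correct or all-incorrect response strings have no finite maximum-likelihood difficulty, the sketch simply keeps the previous estimate until the responses are mixed; an operational implementation would more likely use a Bayesian or stepwise update.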
