A Comparison of Item Exposure Control Methods in Computerized Adaptive Testing

Two new methods for item exposure control were proposed. In the Progressive method, the influence of a random component on item selection decreases as the test progresses, while the weight given to item information becomes increasingly prominent. In the Restricted Maximum Information method, no item is allowed to appear in more than a predetermined proportion of tests. Both methods were compared with six other item-selection methods (Maximum Information, One Parameter, McBride and Martin, Randomesque, Sympson and Hetter, and Random Item Selection) with regard to test precision and item exposure variables. Results showed that the Restricted method was useful for reducing maximum exposure rates and that the Progressive method reduced the number of unused items; both performed well with regard to precision. Thus, a combined Progressive-Restricted method may control item exposure without a serious decrease in test precision.

One of the main goals of computerized adaptive testing (CAT) is to obtain precise ability estimates with a small number of items. To achieve this goal, items are selected specifically for each examinee from a large bank. Selection is based on characteristics of the examinee (the provisional ability estimate) and of the items (their difficulty and discrimination parameters). Thus, a different subset of items may be administered to each person (Hambleton, Swaminathan, & Rogers, 1991).
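The two proposed rules can be illustrated with a minimal sketch. This is not the authors' exact formulation: the linear position-based weight, the 2PL information function, and all function and field names below are assumptions made for illustration.

```python
import math
import random

def fisher_info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def progressive_select(bank, administered, test_length, theta, rng):
    """Progressive rule (sketch): score each eligible item as a weighted
    sum of a random component and its information, with the information
    weight growing over the course of the test. The linear weight used
    here is an assumption, not necessarily the published scheme."""
    k = len(administered)                                  # items given so far
    w = k / (test_length - 1) if test_length > 1 else 1.0  # 0 -> 1 as test advances
    eligible = [i for i in range(len(bank)) if i not in administered]
    infos = {i: fisher_info_2pl(bank[i]["a"], bank[i]["b"], theta)
             for i in eligible}
    max_info = max(infos.values())
    def score(i):
        # random component drawn on the same scale as the information values
        return (1.0 - w) * rng.uniform(0.0, max_info) + w * infos[i]
    return max(eligible, key=score)

def restricted_filter(bank, exposure_counts, tests_given, r_max):
    """Restricted rule (sketch): make ineligible any item whose observed
    exposure rate has already reached the ceiling r_max."""
    if tests_given == 0:
        return list(range(len(bank)))
    return [i for i in range(len(bank))
            if exposure_counts[i] / tests_given < r_max]
```

A combined scheme, as suggested above, would apply the exposure filter first and then run the Progressive rule over the surviving items only.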

[1] J. McBride, et al. Reliability and Validity of Adaptive Ability Tests in a Military Setting, 1983.

[2] Mark D. Reckase, et al. Technical Guidelines for Assessing Computerized Adaptive Tests, 1984.

[3] R. Hambleton, et al. Item Response Theory: Principles and Applications, 1984.

[4] James R. McBride. Computerized Adaptive Testing, 1985.

[5] R. Hambleton, et al. Item Response Theory, 1984, The History of Educational Measurement.

[6] Anthony R. Zara, et al. Procedures for Selecting Items for Computerized Adaptive Tests, 1989.

[7] D. D. Bickerstaff, et al. Computerized Adaptive Testing, 1989.

[8] B. G. Dodd. The Effect of Item Selection Procedure and Stepsize on Computerized Adaptive Attitude Measurement Using the Rating Scale Model, 1990.

[9] R. Hambleton, et al. Fundamentals of Item Response Theory, 1991.

[10] Martha L. Stocking, et al. A Method for Severely Constrained Item Selection in Adaptive Testing, 1992.

[11] Martha L. Stocking. Controlling Item Exposure Rates in a Realistic Adaptive Testing Paradigm, 1993.

[12] W. Alan Nicewander, et al. Ability Estimation for Conventional Tests, 1993.

[13] J. Revuelta, et al. Adtest: A Computer-Adaptive Test Based on the Maximum Information Principle, 1994.

[14] Martha L. Stocking, et al. A New Method of Controlling Item Exposure in Computerized Adaptive Testing, 1995.

[15] Cynthia G. Parshall, et al. New Algorithms for Item Selection and Exposure Control with Computerized Adaptive Testing, 1995.

[16] Martha L. Stocking, et al. Practical Issues in Large-Scale High-Stakes Computerized Adaptive Testing, 1995.

[17] J. Menéndez, et al. Métodos sencillos para el control de las tasas de exposición en tests adaptativos informatizados [Simple methods for controlling exposure rates in computerized adaptive tests], 1996.

[18] Frederick E. Petry, et al. Principles and Applications, 1997.

[19] Rebecca D. Hetter, et al. Item Exposure Control in CAT-ASVAB, 1997.

[20] J. Revuelta, et al. An Investigation of Self-Adapted Testing in a Spanish High School Population, 1997.

[21] Maria T. Potenza, et al. Flawed Items in Computerized Adaptive Testing, 1997.

[22] H. Wainer, et al. Annual Meeting of the American Educational Research Association, 1998.

[23] Mark D. Reckase, et al. Item Response Theory: Parameter Estimation Techniques, 1998.

[24] Howard Wainer. Computerized Adaptive Testing: A Primer, 2000.