Capturing, Codifying and Scoring Complex Data for Innovative, Computer-based Items
Richard M. Luecht, University of North Carolina at Greensboro

The Microsoft Certification Program (MCP) includes many new computer-based item types based on complex cases involving the Windows 2000® operating system. This Innovative Item Technology (IIT) has presented challenges beyond traditional psychometric considerations, including capturing and storing the relevant response data from examinees, codifying those data, and scoring them. This paper presents an integrated, empirically based data-systems approach to processing complex scoring rules and examinee response data for IIT cases and items, highlighting data management considerations, the use of various psychometric analyses to improve the coding and scoring rules, and some of the challenges of scaling the data using a partial credit item response theory model. Empirical examples from the Microsoft Innovative Item Types project illustrate practical problems and solutions.
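The abstract mentions scaling the coded response data with a partial credit item response theory model. As a point of reference only (the paper's actual calibration details are not reproduced here), the sketch below computes category probabilities under Masters' partial credit model, P_x(theta) proportional to exp(sum over k<=x of (theta - delta_k)); the function name, example step difficulties, and ability value are illustrative assumptions, not values from the study.

```python
import numpy as np

def pcm_category_probs(theta, deltas):
    """Category probabilities for a polytomous item under the partial credit model.

    theta  : examinee ability (scalar, logit scale)
    deltas : step difficulties delta_1..delta_m for an item scored 0..m
    Returns an array of m+1 probabilities, one per score category.
    """
    # Cumulative logits: category 0 contributes 0 by convention;
    # category x contributes sum_{k<=x} (theta - delta_k).
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas, dtype=float))))
    expz = np.exp(steps - steps.max())  # subtract the max for numerical stability
    return expz / expz.sum()

# Example: a three-category (0/1/2 credit) item with hypothetical step
# difficulties of -0.5 and 1.0, evaluated for an examinee with theta = 0.3.
probs = pcm_category_probs(theta=0.3, deltas=[-0.5, 1.0])
print(probs, probs.sum())  # the category probabilities sum to 1.0
```

In this framing, the scoring rules map an examinee's raw case responses to a polytomous score level, and the partial credit calibration then places those score levels on a common ability scale.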