This paper describes a four-process model for the operation of a generic assessment: ACTIVITY SELECTION, PRESENTATION, RESPONSE PROCESSING (EVIDENCE IDENTIFICATION), and SUMMARY SCORING (EVIDENCE ACCUMULATION). It discusses the relationships between the functions and responsibilities of these processes and the objects in the IMS Question and Test Interoperability (QTI) information model. The ideas are illustrated with hypothetical examples concerning elementary Chinese language proficiency. The complementary modular structures of the design framework and the operational processes encourage efficiency in design and the reuse of objects and processes.

Infrastructure is usually thought to be dull. Tedious. Few people wish to think about it until it is necessary, which is then often too late. Once established, it is expensive and often difficult to change. Moreover, infrastructures require standardization; they're too expensive and restrictive to allow multiple infrastructures to coexist, too important to society to allow the monetary interests of one company or industry to determine the underlying infrastructure for everyone. Probably the most important lesson for the development of the information appliance industry is the importance of establishing an open, universal standard for exchanging information. If only we can establish world-wide standards for the sharing of information, then the particular infrastructure used within each appliance becomes irrelevant. Each appliance can use whatever best fits its needs. Each company can select whatever infrastructure makes most sense to its operations. Once the information exchange is standardized, nothing else matters.

Donald Norman (1998, pp. 132-133)

The Instructional Management Systems (IMS) project attempts to bring together suppliers of educational material and processes for a variety of purposes and stages in the life of a learner. The challenge facing IMS is to create a framework that supports the delivery of operational assessments fulfilling this range of purposes. This is a tall order: the requirements for a college entrance exam seem quite different from those of an assessment to support learning embedded in an Intelligent Tutoring System, or from those of a large-scale survey of educational achievement. The IMS standard for interoperability among assessment delivery and authoring systems must support both the standard multiple-choice and essay-type items,
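To ground the four-process model sketched above, the following toy example wires the four processes into a single delivery loop. It is a minimal sketch under stated assumptions, not an API defined by the QTI specification: every name in it (Task, StudentModel, select_activity, and so on) is hypothetical, the tasks echo the paper's elementary Chinese examples, and a running proportion-correct stands in for a real measurement model such as IRT or a Bayes net.

```python
# A minimal, hypothetical sketch of the four-process delivery cycle.
# None of these names come from the QTI information model itself.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    task_id: str
    prompt: str
    key: str            # scoring key, for this toy example only

@dataclass
class StudentModel:
    # A running proportion correct stands in here for a real
    # statistical model (e.g., IRT or a Bayes net).
    correct: int = 0
    attempted: int = 0

def select_activity(pool: list, administered: set) -> Optional[Task]:
    """ACTIVITY SELECTION: pick the next task (first unadministered)."""
    return next((t for t in pool if t.task_id not in administered), None)

def present(task: Task, respond: Callable[[str], str]) -> str:
    """PRESENTATION: deliver the prompt and capture the work product."""
    return respond(task.prompt)

def identify_evidence(task: Task, work_product: str) -> bool:
    """RESPONSE PROCESSING (evidence identification): reduce the raw
    work product to an observable variable."""
    return work_product.strip().lower() == task.key.lower()

def accumulate_evidence(model: StudentModel, observable: bool) -> None:
    """SUMMARY SCORING (evidence accumulation): fold the observable
    into the student model."""
    model.attempted += 1
    model.correct += int(observable)

def run_assessment(pool: list, respond: Callable[[str], str]) -> StudentModel:
    model, administered = StudentModel(), set()
    while (task := select_activity(pool, administered)) is not None:
        administered.add(task.task_id)
        product = present(task, respond)
        accumulate_evidence(model, identify_evidence(task, product))
    return model

# Example run with a scripted examinee.
pool = [Task("t1", "Translate 'ni hao':", "hello"),
        Task("t2", "Translate 'xie xie':", "thank you")]
answers = iter(["hello", "thanks"])
result = run_assessment(pool, lambda _prompt: next(answers))
print(f"{result.correct}/{result.attempted} correct")   # -> 1/2 correct
```

Because each process communicates only through the task pool, the work product, the observables, and the student model, any one of the four can be replaced (say, adaptive selection in place of the linear scan above) without touching the others; that separation is the modularity the abstract credits with encouraging reuse of objects and processes.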
[1] Russell G. Almond et al. On Test Selection Strategies for Belief Networks. AISTATS, 1995.
[2] R. Hambleton. Principles and selected applications of item response theory. 1989.
[3] Russell G. Almond et al. Graphical Models and Computerized Adaptive Testing. 1999.
[4] Russell G. Almond et al. On the Roles of Task Model Variables in Assessment Design. 1999.
[5] R. Mislevy. Evidence and inference in educational assessment. 1994.
[6] Howard Wainer et al. Computerized Adaptive Testing: A Primer. 2000.
[7] Russell G. Almond et al. Bayes Nets in Educational Assessment: Where Do the Numbers Come from? CSE Technical Report, 2000.
[8] Robert J. Mislevy et al. The Role of Probability-Based Inference in an Intelligent Tutoring System. 1995.
[9] Martijn P. F. Berger et al. A Review of Selection Methods for Optimal Test Design. Research Report 94-4, 1994.
[10] Raymond J. Adams et al. The Multidimensional Random Coefficients Multinomial Logit Model. 1997.
[11] Donald A. Norman. The Invisible Computer. 1998.