On the Roles of Task Model Variables in Assessment Design.

Tasks are the most visible element in an educational assessment. Their purpose, however, is to provide evidence about targets of inference that cannot be directly seen at all: what examinees know and can do, more broadly conceived than can be observed in the context of any particular set of tasks. This paper concerns issues in assessment design that must be addressed for assessment tasks to serve this purpose effectively and efficiently. The first part of the paper describes a conceptual framework for assessment design, which includes models for tasks. Corresponding models appear for other aspects of an assessment, in the form of a student model, evidence models, an assembly model, a simulator/presentation model, and an interface/environment model. Coherent design requires that these models be coordinated to serve the assessment's purpose. The second part of the paper focuses attention on the task model. It discusses the several roles that task model variables play to achieve the needed coordination in the design phase of an assessment, and to structure task creation and inference in the operational phase.

This paper was presented at the conference "Generating items for cognitive tests: Theory and practice," co-sponsored by Educational Testing Service and the United States Air Force Laboratory and held at the Henry Chauncey Conference Center, Educational Testing Service, Princeton, NJ, November 5-6, 1998.
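As a rough illustration of the kind of coordination the abstract describes, the sketch below renders a few of the framework's models as simple data structures, with task model variables constraining what a concrete task specification may contain. This is a minimal sketch under assumed names and fields (StudentModel, TaskModel, EvidenceModel, specify_task, and the example variables are all illustrative choices, not the paper's specification).

```python
# Illustrative sketch only: a minimal data-structure rendering of coordinated
# design objects (student, task, and evidence models). All class and field
# names are assumptions for illustration, not the paper's specification.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class StudentModel:
    # Latent proficiency variables the assessment is meant to support
    # inferences about.
    proficiency_variables: List[str]


@dataclass
class TaskModel:
    # Task model variables describe features of a family of tasks:
    # content, stimulus material, difficulty drivers, presentation needs.
    name: str
    variables: Dict[str, List[str]]  # variable name -> allowable values

    def specify_task(self, settings: Dict[str, str]) -> Dict[str, str]:
        """Check that a concrete task specification uses only declared
        variables and allowable values, then return it."""
        for var, value in settings.items():
            if var not in self.variables:
                raise ValueError(f"Unknown task model variable: {var}")
            if value not in self.variables[var]:
                raise ValueError(f"Value {value!r} not allowed for {var}")
        return settings


@dataclass
class EvidenceModel:
    # Links observable features of task performances to student model
    # variables, so tasks and inferences stay coordinated.
    observables: List[str]
    targets: List[str]  # student model variables this evidence bears on


if __name__ == "__main__":
    subtraction_tasks = TaskModel(
        name="multi-digit subtraction",
        variables={
            "borrowing_required": ["yes", "no"],
            "number_of_digits": ["2", "3", "4"],
        },
    )
    task = subtraction_tasks.specify_task(
        {"borrowing_required": "yes", "number_of_digits": "3"}
    )
    print(task)
```

In this toy rendering, the task model's variables both structure task creation (only declared features and values can be used) and carry the information the evidence model needs to connect performances back to the student model, which is the coordinating role the paper's second part examines.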
