Alternative Modes of Assessment, Uniform Standards of Validity. Research Report.

In contrast to multiple choice, alternative modes of assessment afford varying degrees of openness in the allowable responses. Prominent among the alternatives is the assessment of performance, sometimes in its own right where the issue is the quality of the particular performance per se, but more often as a vehicle for the assessment of knowledge, skill, or other attributes. Because inferences about score meaning in construct terms and about the action implications of that meaning are fundamentally similar in the alternative assessment modes (despite surface differences), the same standards of validity apply to all educational and psychological measurement. These standards are addressed in terms of content, substantive, structural, generalizability, external, and consequential aspects of construct validity. (Contains 42 references.) (Author)
