Paper-based versus computer-based assessment: key factors associated with the test mode effect

This investigation seeks to confirm several key factors in computer-based versus paper-based assessment. Based on earlier research, the factors considered here include content familiarity, computer familiarity, competitiveness, and gender. Following classroom instruction, freshman business undergraduates (N = 105) were randomly assigned to either a computer-based test or an identical paper-based test. ANOVA of the test data showed that the computer-based group outperformed the paper-based group. Gender, competitiveness, and computer familiarity were not related to this performance difference, though content familiarity was. Higher-attaining students benefited most from computer-based assessment, outperforming comparably high-attaining students under paper-based testing. With the continuing growth of computer-based assessment, instructors and institutions must be aware of, and plan for, possible test mode effects.
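To make the analysis concrete, below is a minimal sketch of the kind of factorial ANOVA described above, written in Python with pandas and statsmodels. The file name and column names (scores.csv, score, mode, gender) are hypothetical placeholders, not the authors' actual data or variable names; the design is simply test mode crossed with one candidate factor at a time.

```python
# Hypothetical sketch of the factorial ANOVA described in the abstract.
# All file and column names below are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed: one row per student, with their test score, assigned test
# mode (computer vs. paper), and a candidate factor such as gender.
df = pd.read_csv("scores.csv")

# Two-way ANOVA: mode crossed with the candidate factor. The
# interaction term tests whether the factor moderates the mode effect.
model = ols("score ~ C(mode) * C(gender)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)
```

In this design, a non-significant C(mode):C(gender) interaction would correspond to the reported finding that gender was unrelated to the mode effect, while a significant interaction for a content-familiarity factor would match the finding that content familiarity was related to it.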
