Developing Computerized Versions of Paper-and-Pencil Tests: Mode Effects for Passage-Based Tests

As testing moves from paper-and-pencil administration toward computerized administration, how to present tests on a computer screen becomes an important concern. Of particular concern are tests in which the information needed to answer an item cannot be displayed on screen all at once. Ideally, the method of presentation should not interfere with examinee performance: examinees should perform similarly on an item regardless of the mode of administration. This paper discusses the development of a computer interface for passage-based, multiple-choice tests. Findings are presented from two studies that compared performance across computer and paper administrations of several fixed-form tests, and the effect of computer interface changes made between the two studies is discussed. Both studies showed some performance differences across modes. Evaluations of individual items suggested a variety of factors that could have contributed to mode effects. Although the observed mode effects were generally small, the findings suggest that it would be beneficial to develop an understanding of the factors that can influence examinee behavior and to design a computer interface accordingly, so that examinees respond to test content rather than to features inherent in presenting the test on computer.
