ELECTRONIC VERSUS TRADITIONAL STUDENT RATINGS OF INSTRUCTION

At a large university, ratings of faculty in five academic areas were collected from two groups of students using paper-and-pencil and electronic survey administration modes. Factor analyses performed on both sets of data showed that the two modes yielded similar factor patterns. A 2 × 5 MANOVA indicated that ratings were significantly influenced by academic area (p < .001) but not by survey method. A high percentage of students in both groups felt confident that their ratings were anonymous, though anonymity ratings were significantly higher (p < .001) in the paper-and-pencil group. Students' satisfaction with the mode of administration was significantly higher (p < .01) for the electronic group than for the paper-and-pencil group. Overall, results suggest that the electronic survey mode is a viable alternative to the paper-and-pencil mode of administration.
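
For readers who want a concrete picture of the reported 2 (survey mode) × 5 (academic area) analysis, the following is a minimal Python sketch of such a MANOVA using statsmodels, assuming hypothetical rating data; the sample size, factor levels, and item names ("clarity", "organization", "overall") are invented for illustration and are not taken from the original study.

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(0)
    n = 400  # hypothetical number of student respondents

    df = pd.DataFrame({
        # 2 administration modes and 5 academic areas (labels are placeholders)
        "mode": rng.choice(["paper", "electronic"], size=n),
        "area": rng.choice(["A", "B", "C", "D", "E"], size=n),
        # hypothetical rating-scale items serving as the dependent variables
        "clarity": rng.normal(4.0, 0.8, size=n),
        "organization": rng.normal(3.8, 0.9, size=n),
        "overall": rng.normal(4.1, 0.7, size=n),
    })

    # 2 x 5 MANOVA: test whether mode, area, or their interaction
    # affect the vector of rating variables
    fit = MANOVA.from_formula(
        "clarity + organization + overall ~ mode * area", data=df
    )
    print(fit.mv_test())

The mv_test() output reports multivariate test statistics (e.g., Wilks' lambda) for each term, which is the kind of evidence the abstract summarizes when it notes a significant effect of academic area but not of survey method.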
