Comparability of Data Gathered from Evaluation Questionnaires on Paper and Through the Internet

Collecting feedback from students through course, program, and other evaluation questionnaires has become a costly and time-consuming process for most colleges. Converting to data collection through the internet, rather than completion on paper, can result in a cheaper and more efficient process. This article examines several research questions that need to be answered to establish that results collected by the two modes of administration are equivalent. Data were gathered for a program evaluation questionnaire from undergraduate students at a university in Hong Kong. Students were able to choose between completion on paper or through the internet. In six of the seven Faculties, the number of responses through each mode was roughly the same; students in the Engineering Faculty favored the internet. Scores on 14 of the 18 scales in the instrument showed small differences by mode of response, which became smaller still after controlling for pertinent demographic variables. The main research question addressed in the study was whether there was any difference in the way respondents to the two modes interpreted the questions. The study demonstrated the equivalence of the two data sets by showing that both could be fitted to a common model with structural equation modeling (SEM). Five levels of tests of invariance further confirmed the comparability of data by mode of administration. This study, therefore, suggests that changing to internet collection for course and program evaluations will not affect the comparability of ratings.