Improving SET Response Rates: Synchronous Online Administration as a Tool to Improve Evaluation Quality

Institutions of higher education continue to migrate student evaluations of teaching (SET) from traditional in-class paper forms to online administration. Online SETs would compare favorably to paper-and-pencil evaluations were it not for widely reported declines in response rates, which raise validity concerns stemming from possible nonresponse bias. To combat low response rates, one institution introduced a SET application for mobile devices and piloted formal synchronous classroom time for SET completion. This paper uses Leverage-Saliency Theory to estimate the impact of these SET process changes on overall response rates, open-ended question response rates, and open-ended response word counts. Synchronous class time best improves SET responses when faculty encourage completion on keyboarded devices and give students time to complete the SET during the first 15 minutes of a class meeting. Full support from administrators requires sufficient wireless signal strength, adequate IT infrastructure, and ensuring student access to devices when responses cluster around class meeting times.
