What are the Links in a Web Survey Among Response Time, Quality, and Auto-Evaluation of the Efforts Done?

Evaluating data quality is a central concern for researchers who want to be confident in their results. When web surveys are used, this seems even more crucial, since researchers have less control over the data collection process. However, web surveys also make it possible to collect paradata that may help evaluate quality. Using these paradata, it has been observed that some web-panel respondents spend far less time than expected completing surveys. This raises concerns about the quality of the data obtained. Nevertheless, little is known about the link between response times (RTs) and quality. Therefore, this study examines the link between respondents' RTs in an online survey and more conventional quality indicators from the literature: correctly following an instructional manipulation check (IMC), coherence and precision of answers, absence of straight-lining, and so on. We are also interested in how RT and the quality indicators relate to respondents' auto-evaluation of the effort they put into answering the survey. Using a structural equation modeling approach that allows us to separate the structural and measurement models and to control for potential spurious effects, we find a significant relationship between RT and quality in the three countries studied. We also find a significant but weaker relationship between RT and auto-evaluation. However, we did not find a significant link between auto-evaluation and quality.
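To make the quality indicators concrete, the following is a minimal sketch in Python (pandas) of how respondent-level flags for straight-lining, an IMC pass, and speeding might be computed. The column names (q1–q5, imc_answer, rt_seconds), the instructed IMC answer, and the speeding threshold are illustrative assumptions for this sketch, not the study's actual operationalizations.

```python
import pandas as pd

# Hypothetical respondent-level data: one row per respondent, with
# five grid items (q1..q5), an IMC item, and total response time in seconds.
df = pd.DataFrame({
    "q1": [3, 5, 2, 4],
    "q2": [3, 4, 2, 4],
    "q3": [3, 5, 1, 4],
    "q4": [3, 2, 5, 4],
    "q5": [3, 4, 2, 4],
    "imc_answer": ["other", "other", "red", "other"],  # "other" = instructed answer (assumption)
    "rt_seconds": [95, 310, 60, 250],
})

grid = df[["q1", "q2", "q3", "q4", "q5"]]

# Straight-lining: identical answers across all items of a grid.
df["straight_lining"] = grid.nunique(axis=1).eq(1)

# Instructional manipulation check: did the respondent follow the instruction?
df["imc_passed"] = df["imc_answer"].eq("other")

# Speeding: flag respondents far below the median response time;
# the factor of 0.5 is an arbitrary illustrative cutoff.
df["speeder"] = df["rt_seconds"] < 0.5 * df["rt_seconds"].median()

print(df[["straight_lining", "imc_passed", "speeder"]])
```

In the study itself, indicators of this kind feed into a latent quality variable within the structural equation model; the sketch above covers only the flagging step, not the measurement or structural parts of the model.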
