Are Fast Responses More Random? Testing the Effect of Response Time on Scale in an Online Choice Experiment

Scepticism about stated preference surveys conducted online centres on concerns about “professional respondents” who may rush through the questionnaire without sufficiently considering the information provided. To gain insight into the validity of this concern and to test the effect of response time on choice randomness, this study uses a recently conducted choice experiment survey on the ecological and amenity effects of an offshore wind farm in the UK. A positive relationship between response time and both self-rated and inferred attribute attendance is taken as evidence of a link between response time and cognitive effort. The generalised multinomial logit model is then employed to test the effect of response time on scale, which reflects the weight of the deterministic component relative to the error component in the random utility model. Results show that longer response times increase scale, i.e. decrease choice randomness. This positive scale effect of response time is further found to be non-linear, wearing off beyond a point after which extremely long response times decrease scale. While response time does not systematically affect welfare estimates, longer response times increase the precision of those estimates. These effects persist when self-reported choice certainty is controlled for. Implications of the results for online stated preference surveys and for further research are discussed.
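
As a point of reference, the following is a minimal sketch of how scale can enter a random utility model in a G-MNL-style specification. The parameterisation of scale as a quadratic function of log response time is an illustrative assumption chosen to accommodate the non-linearity described above, not necessarily the paper's exact specification:

  U_{njt} = \sigma_n \, \beta' x_{njt} + \varepsilon_{njt}, \qquad
  \sigma_n = \exp\!\left( \delta_1 \ln RT_n + \delta_2 (\ln RT_n)^2 + \tau w_n \right)

Here U_{njt} is the utility respondent n derives from alternative j in choice task t, x_{njt} is the vector of attributes, \varepsilon_{njt} is an i.i.d. type-I extreme value error, RT_n is the respondent's response time, w_n is a standard normal draw capturing residual scale heterogeneity, and \delta_1, \delta_2 and \tau are parameters to be estimated. Under this hypothetical form, a positive \delta_1 combined with a negative \delta_2 would reproduce the reported pattern: scale increases with response time, but the effect tapers off and eventually reverses at extreme response times.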

[1]  D. Hensher,et al.  Stated Choice Methods: Analysis and Applications , 2000 .

[2]  Jon A. Krosnick,et al.  Research Synthesis AAPOR Report on Online Panels , 2010 .

[3]  Henrik Lindhjem,et al.  Using Internet in Stated Preference Surveys: A Review and Comparison of Survey Modes , 2011 .

[4]  Roy Brouwer,et al.  Choice Certainty and Consistency in Repeated Choice Experiments , 2010 .

[5]  Thomas C. Brown,et al.  Reliability of individual valuations of public and private goods: Choice consistency, response time, and preference refinement , 2008 .

[6]  Carsten Lynge Jensen,et al.  Attending to the Reasons for Attribute Non-attendance in Choice Experiments , 2011 .

[7]  David A. Hensher,et al.  Modelling attribute non-attendance in choice experiments for rural landscape valuation , 2009 .

[8]  Thomas P. Holmes,et al.  The effect of response time on conjoint analysis estimates of rainforest protection values. , 1998 .

[9]  Raffaele Zanoli,et al.  Inferred and Stated Attribute Non‐Attendance in Food Choice Experiments , 2013 .

[10]  Mickael Bech,et al.  Does the number of choice sets matter? Results from a web survey applying a discrete choice experiment. , 2011, Health economics.

[11]  Christopher M. Fleming,et al.  Web-based surveys as an alternative to traditional mail methods. , 2009, Journal of environmental management.

[12]  A. Collins,et al.  If You Provide It, Will They Read It? Response Time Effects in a Choice Experiment , 2009 .

[13]  J. Nielsen Use of the Internet for willingness-to-pay surveys: A comparison of face-to-face and web-based interviews , 2011 .

[14]  McKenzie Young,et al.  Professional respondents in non-probability online panels , 2014 .

[15]  Anthony J Smith,et al.  Time to Think , 2004, Journal of dental research.

[16]  Mario Callegaro,et al.  Online Panel Research: A Data Quality Perspective , 2014 .

[17]  Jacob Ladenburg,et al.  Willingness to pay for reduced visual disamenities from offshore wind farms in denmark , 2007 .

[18]  D. McFadden Conditional logit analysis of qualitative choice behavior , 1972 .

[19]  Robert Kohn,et al.  Dissecting the Random Component of Utility , 2002 .

[20]  Leif Mattsson,et al.  Discrete choice under preference uncertainty: an improved structural model for contingent valuation. , 1995 .

[21]  Marc Jeuland,et al.  Giving Stated Preference Respondents “Time to Think”: Results From Four Countries , 2010 .

[22]  D. Schwappach,et al.  "Quick and dirty numbers"? The reliability of a stated-preference technique for the measurement of preferences for resource allocation. , 2006, Journal of health economics.

[23]  Tobias Gummer,et al.  Explaining Interview Duration in Web Surveys , 2015 .

[24]  J. Loomis 2013WAEA Keynote Address: Strategies for Overcoming Hypothetical Bias in Stated Preference Surveys , 2014 .

[25]  J. Rose,et al.  Means matter, but variance matter too: Decomposing response latency influences on variance heterogeneity in stated preference experiments , 2006 .

[26]  K. Train,et al.  Mixed Logit with Repeated Choices: Households' Choices of Appliance Efficiency Level , 1998, Review of Economics and Statistics.

[27]  D. Sunshine Hillygus,et al.  Professional respondents in nonprobability online panels , 2014 .

[28]  Jeremy Firestone,et al.  Valuing the Visual Disamenity of Offshore Wind Power Projects at Varying Distances from the Shore: An Application on the Delaware Shoreline , 2011, Land Economics.

[29]  Peter Bonsall,et al.  Factors affecting the amount of effort expended in responding to questions in behavioural choice experiments , 2009 .

[30]  John Rolfe,et al.  Comparing Responses from Internet and Paper-Based Collection Methods in more Complex Stated Preference Environmental Valuation Surveys , 2011 .

[31]  Marit E. Kragt,et al.  Stated and inferred attribute attendance models: A comparison with environmental choice experiments , 2013 .

[32]  J. Gibbons,et al.  The effect of individual 'ability to choose' (scale heterogeneity) on the valuation of environmental goods , 2011 .

[33]  N. Hanley,et al.  We want to sort! – assessing households’ preferences for sorting waste , 2014 .

[34]  Marit E. Kragt,et al.  The Effects of Changing Cost Vectors on Choices and Scale Heterogeneity , 2013 .

[35]  Jacob LaRiviere,et al.  The Effects of Experience on Preferences: Theory and Empirics for Environmental Public Goods , 2015 .

[36]  Arne Risa Hole,et al.  Fitting the Generalized Multinomial Logit Model in Stata , 2013 .

[37]  Stephane Hess,et al.  Linking Response Quality to Survey Engagement: A Combined Random Scale and Latent Variable Approach , 2013 .

[38]  H. Svedsater Ambivalent Statements in Contingent Valuation Studies: Inclusive Response Formats and Giving Respondents Time to Think , 2007 .

[39]  John M. Rose,et al.  Consistently inconsistent: The role of certainty, acceptability and scale in choice , 2013 .

[40]  John M. Rose,et al.  Can scale and coefficient heterogeneity be separated in random coefficients models? , 2012 .

[41]  S. Navrud,et al.  Are Internet Surveys an Alternative to Face-to Face Interviews in Contingent Valuation? , 2011 .

[42]  Nick Hanley,et al.  Controlling for the Effects of Information in a Public Goods Discrete Choice Model , 2014, Environmental and Resource Economics.

[43]  I. Krinsky,et al.  On Approximating the Statistical Properties of Elasticities , 1986 .

[44]  Dale Whittington,et al.  Giving Respondents Time to Think in Contingent Valuation Studies: A Developing Country Application* , 1992 .

[45]  Neil Malhotra,et al.  Completion Time and Response Order Effects in Web Surveys , 2008 .

[46]  Rauli Svento,et al.  Modeling observed and unobserved heterogeneity in choice experiments , 2017 .

[47]  Denzil G. Fiebig,et al.  The Generalized Multinomial Logit Model: Accounting for Scale and Coefficient Heterogeneity , 2010, Mark. Sci..

[48]  J. Martin-Ortega,et al.  Inferring Attribute Non-attendance from Discrete Choice Experiments: Implications for Benefit Transfer , 2015 .

[49]  Michel Wedel,et al.  Response Latencies in the Analysis of Conjoint Choice Experiments , 2000 .

[50]  Søren Bøye Olsen,et al.  Choosing Between Internet and Mail Survey Modes for Choice Experiment Surveys Considering Non-Market Goods , 2009 .

[51]  Arne Risa Hole,et al.  Inferred vs Stated Attribute Non-Attendance in Choice Experiments: A Study of Doctors' Prescription Behaviour , 2012 .

[52]  Dale Whittington,et al.  RELIABILITY OF STATED PREFERENCES FOR CHOLERA AND TYPHOID VACCINES WITH TIME TO THINK IN HUE, VIETNAM , 2006 .

[53]  John M. Rose,et al.  Design Efficiency for Non-Market Valuation with Choice Modelling: How to Measure it, What to Report and Why , 2008 .

[54]  Søren Bøye Olsen,et al.  Handling respondent uncertainty in Choice Experiments: Evaluating recoding approaches against explicit modelling of uncertainty , 2009 .

[55]  Danny Campbell,et al.  How quick can you click? The role of response , 2013 .

[56]  Mark S. McNulty,et al.  Mode Effects and Other Potential Biases in Panel-based Internet Surveys: Final Report , 2009 .