Some Unfortunate Consequences of Non-Randomized, Grouped-Item Survey Administration in IS Research

Arranging survey items so that measures of the same construct appear together has several benefits, including ease of administration and enhanced statistical reliability and validity of constructs. Yet some IS researchers claim this practice contributes to common method bias and camouflages "true" measures of reliability. Our study takes a new approach to this issue, using a range of IS research measures in an online survey context to contrast grouped-item administration with a design in which the order of item administration is programmatically re-randomized for each individual subject. We find significant differences in construct reliability between the grouped-item and individually randomized treatments, as well as strong temporal effects and widespread item-ordering anomalies in the grouped-item treatment. Our results suggest that the purported benefits of grouped-item surveys are outweighed by the hazards they create for the integrity of research findings, and we caution IS researchers against their continued use.
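The paper does not publish its survey platform or instrument, but the two administration designs the abstract contrasts are easy to illustrate. Below is a minimal Python sketch, under stated assumptions: the item pool, construct names, and item labels (PU1, PEOU1, ...) are hypothetical placeholders in the TAM tradition, not the study's instrument, and seeding the shuffle with a participant ID is one plausible way to make each subject's ordering reproducible, not necessarily the authors' method.

```python
import random

# Hypothetical item pool mapping each construct to its measurement items.
# Labels are illustrative only; the study's actual instrument is not shown here.
ITEM_POOL = {
    "perceived_usefulness": ["PU1", "PU2", "PU3", "PU4"],
    "perceived_ease_of_use": ["PEOU1", "PEOU2", "PEOU3", "PEOU4"],
    "behavioral_intention": ["BI1", "BI2", "BI3"],
}


def grouped_order() -> list[str]:
    """Grouped-item administration: all items of a construct appear together."""
    return [item for items in ITEM_POOL.values() for item in items]


def randomized_order(participant_id: str) -> list[str]:
    """Individually randomized administration: a fresh ordering per subject.

    Seeding the RNG with the participant ID (an assumption for this sketch)
    makes each subject's ordering auditable while still differing across
    subjects, which is the property the randomized treatment requires.
    """
    rng = random.Random(participant_id)
    items = [item for items in ITEM_POOL.values() for item in items]
    rng.shuffle(items)
    return items


if __name__ == "__main__":
    print(grouped_order())                   # same fixed order for every subject
    print(randomized_order("subject-001"))   # unique order for this subject
    print(randomized_order("subject-002"))   # a different order for this one
```

In the grouped design, every respondent sees, say, PU1 through PU4 in a block, which can inflate inter-item correlations; in the randomized design, items from different constructs are interleaved differently for each respondent, so any ordering-driven inflation washes out across the sample.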
