Measurement Effects of Survey Mode on the Equivalence of Attitudinal Rating Scale Questions

This study applies multiple-group ordinal confirmatory factor analysis to assess the equivalence of measurement scales, random errors, and systematic (nonrandom) errors for attitudinal questions administered on rating scales under different survey modes (Face-to-Face [F2F], Telephone, Paper, and Web). Empirical findings from a large-scale experiment are presented. Consistent with theoretical expectations, interviewer-administered and self-administered surveys measured all assessed questions on systematically different scales, with different systematic biases and differing amounts of random error. These measurement effects were absent when comparing Paper with Web or F2F with Telephone. It is concluded that survey mode primarily produces systematic measurement effects that affect multiple items equally. Interviewer-administered and self-administered modes should therefore be combined only with great care in mixed-mode surveys of attitudinal constructs. Combining Paper with Web, or Telephone with F2F, remains a viable option. Of these, the self-administered modes appear more efficient, because they exhibited higher indicator reliabilities (smaller random error) than the interviewer-administered modes.
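
Measurement equivalence in such a multiple-group ordinal CFA is typically evaluated by comparing a sequence of nested models (configural, equal loadings, equal thresholds) across the mode groups. The abstract does not name the estimation software, so the following Python sketch only illustrates the nested-model comparison step, with invented fit statistics as placeholders, and assumes the models themselves are estimated in dedicated SEM software. With the robust weighted-least-squares estimators commonly used for ordinal indicators, a scaled difference test would be required; the plain chi-square difference shown here is illustrative only.

# Minimal sketch of the nested-model comparisons used in multi-group
# measurement-invariance testing. The fit statistics below are invented
# placeholders; in practice they would come from SEM software that
# estimates the ordinal CFA under each set of equality constraints.
# Caveat: with robust WLSMV-type estimators a scaled difference test is
# required; the plain chi-square difference here is only illustrative.
from dataclasses import dataclass
from scipy.stats import chi2


@dataclass
class FitResult:
    name: str      # invariance level of the multi-group model
    chisq: float   # model chi-square
    df: int        # degrees of freedom
    cfi: float     # comparative fit index


def chisq_difference_test(restricted: FitResult, free: FitResult) -> float:
    """p-value of the (naive) chi-square difference test between nested models."""
    d_chisq = restricted.chisq - free.chisq
    d_df = restricted.df - free.df
    return chi2.sf(d_chisq, d_df)


# Hypothetical fit results for three nested multi-group ordinal CFA models
# (values are invented for illustration only).
models = [
    FitResult("configural (no equality constraints)", 312.4, 180, 0.985),
    FitResult("equal loadings across modes",          338.9, 192, 0.983),
    FitResult("equal loadings and thresholds",        401.7, 216, 0.974),
]

for restricted, free in zip(models[1:], models[:-1]):
    p = chisq_difference_test(restricted, free)
    d_cfi = free.cfi - restricted.cfi
    print(f"{free.name} -> {restricted.name}: "
          f"delta-chi2 p = {p:.3f}, delta-CFI = {d_cfi:.3f}")

A common complementary heuristic inspects the drop in CFI between adjacent models, as printed above; a substantial drop signals that the added equality constraints are not tenable.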
