Sensitive Questions in Online Surveys: An Experimental Evaluation of the Randomized Response Technique and the Crosswise Model

Self-administered online surveys provide a higher level of privacy protection to respondents than interviewer-administered surveys. Yet studies indicate that asking sensitive questions remains problematic even in self-administered surveys. Because respondents may be unwilling to reveal the truth and instead provide answers subject to social desirability bias, the validity of prevalence estimates of sensitive behaviors from online surveys can be challenged. A well-known method for overcoming these problems is the Randomized Response Technique (RRT). However, convincing evidence that the RRT provides more valid estimates than direct questioning in online surveys is still lacking. A new variant of the RRT, the Crosswise Model, has recently been proposed to overcome some of the deficiencies of existing RRT designs. We therefore conducted an experimental study in which different implementations of the RRT, including two implementations of the Crosswise Model, were tested and compared to direct questioning. Our study is a large-scale online survey (N = 6,037) on sensitive behaviors among students, such as cheating on exams and plagiarism. Results indicate that the crosswise-model RRT, unlike the other RRT variants we evaluated, yields higher prevalence estimates of sensitive behaviors than direct questioning. Whether higher estimates are a sufficient condition for more valid results, however, remains questionable.
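To illustrate how crosswise-model responses translate into a prevalence estimate, the sketch below implements the standard moment estimator pi_hat = (lambda_hat + p - 1) / (2p - 1), where lambda_hat is the observed share of respondents choosing "both statements true or neither true" and p is the known prevalence of the non-sensitive statement. This is a minimal illustration, not the authors' exact analysis; the function name and all numbers are hypothetical.

```python
import math

def crosswise_prevalence(n_same, n_total, p):
    """Moment estimator for the crosswise model.

    n_same  -- respondents answering "both statements true or neither true"
    n_total -- total number of respondents
    p       -- known prevalence of the non-sensitive statement (must differ from 0.5)
    """
    lam = n_same / n_total                       # observed share of "same" answers
    pi_hat = (lam + p - 1) / (2 * p - 1)         # estimated prevalence of the sensitive trait
    se = math.sqrt(lam * (1 - lam) / n_total) / abs(2 * p - 1)  # approximate standard error
    return pi_hat, se

# Hypothetical data: 2,100 of 3,000 respondents chose the "same" option; the
# non-sensitive statement (e.g., mother's birthday in January or February)
# has a known prevalence of roughly 1/6.
est, se = crosswise_prevalence(2100, 3000, 1 / 6)
print(f"Estimated prevalence: {est:.3f} (SE {se:.3f})")
```

Note that as p approaches 0.5 the denominator 2p - 1 shrinks and the estimator becomes very noisy, which is why crosswise-model implementations use non-sensitive statements whose known prevalence lies well away from one half.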
