New non-randomised model to assess the prevalence of discriminating behaviour: a pilot study on mephedrone

Background: An advantage of randomised response and non-randomised models for investigating sensitive issues is that individual answers about discriminating behaviour cannot be linked to the individuals who gave them. This study proposes a new fuzzy response model, coined the 'Single Sample Count' (SSC), to estimate the prevalence of discriminating or embarrassing behaviour in epidemiological studies.

Methods: The SSC was tested and compared with the established Forced Response (FR) model in estimating mephedrone use. Estimates from both the SSC and the FR were then corroborated with qualitative hair screening data. Volunteers (n = 318, mean age = 22.69 ± 5.87 years, 59.1% male) in a rural area of north Wales and a metropolitan area of England completed a questionnaire containing the SSC and FR in alternating order, together with four questions canvassing opinions and beliefs regarding mephedrone. Hair samples were screened for mephedrone using a qualitative liquid chromatography-mass spectrometry method.

Results: The SSC algorithm improves upon existing item count techniques by utilising known population distributions: it embeds the sensitive question among four unrelated innocuous questions whose combined "true" count follows a known binomial distribution. Respondents are asked to indicate only how many of the statements are true, without revealing which ones (a minimal estimator sketch follows the abstract). The two probability models yielded similar estimates: the FR estimate ranged between 2.6% and 15.0%, whereas the new SSC estimate ranged between 0% and 10%. The six positive hair samples indicated that the prevalence rate in the sample was at least 4%. The close proximity of these estimates provides evidence supporting the validity of the new SSC model. Using simulations, recommended sample sizes were calculated as a function of statistical power and expected prevalence rate (a simulation sketch also follows below).

Conclusion: The main advantages of the SSC over other indirect methods are simple administration, completion and calculation, maximum use of the data, and good face validity for all respondents. Because respondents are not required to answer the sensitive question directly, and there is no forced response or obvious self-protective response strategy, the SSC has the potential to cut across self-protective barriers more effectively than other estimation models. This elegantly simple, quick and effective method can be successfully employed in public health research investigating compromising behaviours.
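To make the estimation step concrete, the following is a minimal sketch, assuming the design described above: four innocuous yes/no items, each true with a known probability of 0.5, so the innocuous part of each reported total follows Binomial(4, 0.5) with mean 2, and the prevalence is estimated as the excess of the observed mean count over that known mean. The function name ssc_prevalence and the simulated data are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def ssc_prevalence(counts, n_innocuous=4, p_innocuous=0.5):
    """Moment estimator of prevalence from Single Sample Count totals.

    counts: reported totals per respondent (0 .. n_innocuous + 1).
    Assumes each innocuous item is 'true' with known probability
    p_innocuous, so the innocuous part follows Binomial(n_innocuous,
    p_innocuous) with mean n_innocuous * p_innocuous.
    """
    counts = np.asarray(counts, dtype=float)
    expected_innocuous = n_innocuous * p_innocuous      # e.g. 4 * 0.5 = 2
    pi_hat = counts.mean() - expected_innocuous         # excess over the known mean
    pi_hat = min(max(pi_hat, 0.0), 1.0)                 # clamp to the valid range
    # Approximate standard error: binomial variance of the innocuous items
    # plus Bernoulli variance of the sensitive item, divided by n.
    var = (n_innocuous * p_innocuous * (1 - p_innocuous)
           + pi_hat * (1 - pi_hat)) / counts.size
    return pi_hat, float(np.sqrt(var))

# Illustration with simulated data: 318 respondents, true prevalence 5%.
rng = np.random.default_rng(1)
totals = rng.binomial(4, 0.5, size=318) + rng.binomial(1, 0.05, size=318)
est, se = ssc_prevalence(totals)
print(f"estimated prevalence: {est:.3f} (SE ~ {se:.3f})")
```

Because only the total count is reported, no single respondent's answer to the sensitive item can be recovered, which is the privacy property the model relies on.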

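The sample-size guidance mentioned in the Results can likewise be approximated by Monte Carlo simulation. The sketch below is a minimal illustration under stated assumptions, not the authors' simulation code: it assumes the same Binomial(4, 0.5) innocuous design and a one-sided z-test of zero prevalence; the function name ssc_power and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def ssc_power(n, true_prev, n_innocuous=4, p_innocuous=0.5,
              alpha=0.05, n_sims=2000, seed=0):
    """Simulated power to reject zero prevalence under the SSC design."""
    rng = np.random.default_rng(seed)
    # Under the null (zero prevalence) only the innocuous binomial part varies.
    se_null = np.sqrt(n_innocuous * p_innocuous * (1 - p_innocuous) / n)
    z_crit = norm.ppf(1 - alpha)  # one-sided critical value
    rejections = 0
    for _ in range(n_sims):
        counts = (rng.binomial(n_innocuous, p_innocuous, size=n)
                  + rng.binomial(1, true_prev, size=n))
        pi_hat = counts.mean() - n_innocuous * p_innocuous
        if pi_hat / se_null > z_crit:
            rejections += 1
    return rejections / n_sims

# Rough power for detecting a 5% prevalence at several sample sizes.
for n in (200, 400, 800, 1600):
    print(n, round(ssc_power(n, true_prev=0.05), 2))
```

Running the grid shows how quickly power grows with sample size at a given expected prevalence; the smallest n reaching the target power (commonly 80%) would be the recommended sample size under these assumptions.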