A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors

Background
Surveys of doctors are an important data collection method in health services research. Ways to improve response rates and to minimise response bias and item non-response within a given budget have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors.

Methods
A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. The sample was stratified by four doctor types (general practitioners, specialists, specialists-in-training, and hospital non-specialists) and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: an online questionnaire (n = 902); a simultaneous mixed mode, in which a paper questionnaire and online login details were sent together (n = 900); or a sequential mixed mode, in which the online questionnaire was followed by a paper questionnaire sent with the reminder (n = 900). Analysis was by intention to treat, as within each primary mode doctors could choose to respond either on paper or online. Primary outcome measures were response rate, response bias, item non-response, and cost.

Results
The online mode achieved a response rate of 12.95%, compared with 19.7% for the simultaneous mixed mode and 20.7% for the sequential mixed mode. After adjusting for observed differences between the groups, the online mode had a response rate 7 percentage points lower than the simultaneous mixed mode and 7.7 percentage points lower than the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to those of the population. However, the online mode had a higher rate of item non-response than either mixed mode.
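As a minimal sketch, the unadjusted percentage-point gaps implied by the reported allocation and response rates can be computed as follows. Note that the paper's 7 and 7.7 percentage-point figures are adjusted for observed group differences, so the raw gaps below differ slightly.

```python
# Reported allocation and response rates from the trial.
# The adjusted differences reported in the paper control for observed
# group differences; this reproduces only the unadjusted gaps.
arms = {
    "online": {"allocated": 902, "response_rate": 12.95},
    "simultaneous_mixed": {"allocated": 900, "response_rate": 19.7},
    "sequential_mixed": {"allocated": 900, "response_rate": 20.7},
}

def pp_difference(a: str, b: str) -> float:
    """Percentage-point gap in response rate between arms a and b."""
    return round(arms[a]["response_rate"] - arms[b]["response_rate"], 2)

print(pp_difference("simultaneous_mixed", "online"))  # 6.75 unadjusted
print(pp_difference("sequential_mixed", "online"))    # 7.75 unadjusted
```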
The total cost of the online survey was 38% lower than that of the simultaneous mixed mode and 22% lower than that of the sequential mixed mode; the sequential mixed mode cost 14% less than the simultaneous mixed mode. Compared with the online mode, the sequential mixed mode was the most cost-effective, although it showed some evidence of response bias.

Conclusions
Decisions on which survey mode to use depend on response rates, response bias, item non-response, and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the doctor population, provided one is prepared to accept a degree of response bias. Online surveys are not yet suitable for exclusive use in surveys of the doctor population.
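The cost-effectiveness comparison above reduces to cost per completed response: the total cost of an arm divided by the number of responses it yields. The sketch below illustrates this calculation; the dollar amount is an invented placeholder, since the paper reports only relative cost differences between modes.

```python
# Hypothetical illustration of cost per completed response.
# The $10,000 budget is a placeholder, not a figure from the paper;
# allocation and response rate are the sequential mixed-mode values.
def cost_per_response(total_cost: float, allocated: int,
                      response_rate_pct: float) -> float:
    """Total arm cost divided by the expected number of responses."""
    responses = allocated * response_rate_pct / 100
    return round(total_cost / responses, 2)

# e.g. an assumed $10,000 spend on the sequential mixed-mode arm:
print(cost_per_response(10_000, 900, 20.7))  # 53.68 dollars per response
```

A lower-cost mode with a lower response rate (such as online-only here) can still lose on this metric, which is why the sequential mixed mode comes out as the most cost-effective despite not being the cheapest in absolute terms.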
