Beyond the Randomized Controlled Trial: A Review of Alternatives in mHealth Clinical Trial Methods

Background: Randomized controlled trials (RCTs) have long been considered the primary research design capable of establishing causal relationships between health interventions and their outcomes. However, with a prolonged duration from recruitment to publication, high-cost trial implementation, and a rigid trial protocol, RCTs are perceived as an impractical evaluation methodology for most mHealth apps.

Objective: Given the recent development of alternative evaluation methodologies and tools to automate mHealth research, we sought to determine the breadth of these methods and the extent to which they were being used in clinical trials.

Methods: We reviewed the ClinicalTrials.gov registry to identify and examine current clinical trials involving mHealth apps, retrieving relevant trials registered between November 2014 and November 2015.

Results: Of the 137 trials identified, 71 met the inclusion criteria. The majority used a randomized controlled trial design (80%, 57/71). Study designs included two-group pretest-posttest control group comparisons (51%, 36/71), posttest-only control group comparisons (23%, 16/71), one-group pretest-posttest designs (10%, 7/71), one-shot case studies (3%, 2/71), and static-group comparisons (3%, 2/71). A total of 17 trials included a qualitative component in their methodology (24%, 17/71). Complete trial data collection required 21 months on average (SD 12 months). For trials with a total duration of 2 years or more (31%, 22/71), the average time from recruitment to complete data collection (mean 35 months, SD 10) was 2 years longer than the average time required to collect primary data (mean 11 months, SD 8). Trials had a moderate sample size of 112 participants. Two trials were conducted online (3%, 2/71) and 7 trials collected data continuously (10%, 7/68); onsite study implementation was heavily favored (97%, 69/71). Trials with four data collection points had a longer study duration than trials with two data collection points (F(4,56)=3.2, P=.021, η²=0.18). Single-blinded trials had a longer data collection period than open trials (F(2,58)=3.8, P=.028, η²=0.12). Academic sponsorship was the most common form of trial funding (73%, 52/71), and trials with academic sponsorship had a longer study duration than trials with industry sponsorship (F(2,61)=3.7, P=.030, η²=0.11). Combined, data collection frequency, study masking, sample size, and study sponsorship accounted for 32.6% of the variance in study duration (F(4,55)=6.6, P<.01, adjusted R²=.33). Only 7 trials had been completed at the time this retrospective review was conducted (10%, 7/71).

Conclusions: mHealth evaluation methodology has not deviated from conventional methods, despite the need for more relevant and timely evaluations. Clinical evaluation must keep pace with the rate of innovation in mHealth if it is to have a meaningful impact in informing payers, providers, policy makers, and patients.
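
To make the combined model reported in the Results concrete, the following is a minimal sketch of how a comparable analysis of study duration could be run on a trial-level dataset. It uses synthetic data, and the column names (duration_months, collection_points, masking, sample_size, sponsorship) are illustrative assumptions rather than the authors' actual variables or coding.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 71  # number of trials that met the inclusion criteria

# Synthetic, illustrative trial-level data; values and codings are assumptions.
trials = pd.DataFrame({
    "duration_months": rng.normal(21, 12, n).clip(min=3),
    "collection_points": rng.choice([2, 3, 4], n),
    "masking": rng.choice(["open", "single_blind", "double_blind"], n),
    "sample_size": rng.integers(20, 400, n),
    "sponsorship": rng.choice(["academic", "industry", "other"], n),
})

# One-way ANOVA analogous to the reported F tests (here, duration by masking).
masking_model = smf.ols("duration_months ~ C(masking)", data=trials).fit()
print(sm.stats.anova_lm(masking_model, typ=2))

# Multiple regression analogous to the combined model; rsquared_adj is the
# counterpart of the reported adjusted R-squared.
combined = smf.ols(
    "duration_months ~ C(collection_points) + C(masking) + sample_size + C(sponsorship)",
    data=trials,
).fit()
print(f"F = {combined.fvalue:.1f}, p = {combined.f_pvalue:.3f}, "
      f"adjusted R^2 = {combined.rsquared_adj:.2f}")
```

With the authors' actual dataset substituted for the synthetic one, this mirrors the ANOVA and regression structure implied by the abstract; it is not the published analysis code.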
