Reliable Data Collection in Participatory Trials to Assess Digital Healthcare Applications

The number of digital healthcare mobile applications on the market is increasing exponentially, driven by the development of mobile networks and the widespread use of smartphones; however, only a few of these applications have been adequately validated. Like most mobile applications, healthcare applications are generally considered safe to use, so developers and end users can easily exchange them in the marketplace. Existing platforms, however, are unsuitable for collecting reliable data with which to evaluate the effectiveness of these applications, and they reflect only the perspectives of developers and experts, not those of end users. Typical clinical trial data collection methods, for instance, are not appropriate for participant-driven assessment of healthcare applications because of their complexity and high cost. We therefore identified the need for a participant-driven data collection platform for end users that is interpretable, systematic, and sustainable, as a first step toward validating the effectiveness of these applications. To collect reliable data in a participatory trial format, we defined distinct stages for data preparation, storage, and sharing. The interpretable data preparation stage consists of a protocol database system and a semantic feature retrieval method that allow a person without professional knowledge to create a protocol. The systematic data storage stage includes calculation of a reliability weight for the collected data. For sustainable data sharing, we integrated the weight method with a future reward distribution function. We validated these methods through statistical tests involving 718 human participants. The results of the validation experiment show that the compared methods differ significantly and that choosing an appropriate method is essential for collecting reliable data and, in turn, for validating the effectiveness of digital healthcare applications. Furthermore, we built a Web-based system for our pilot platform that collects reliable data in an integrated pipeline, and we compared its features with those of existing clinical and pragmatic trial data collection platforms.
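To make the storage and sharing stages more concrete, the sketch below (in Python) illustrates one way a per-participant reliability weight could be computed and then used to split a future reward pool. This is a minimal illustration under assumed inputs (attention-check accuracy and response time), not the platform's actual weight or reward formulas; all names, fields, and thresholds are hypothetical.

    # Illustrative sketch only: a simple reliability weight and a proportional
    # future-reward split. The checks and the proportional rule are assumptions
    # made for this example, not the authors' published method.

    from dataclasses import dataclass


    @dataclass
    class Submission:
        participant_id: str
        passed_attention_checks: int   # instructed-response items answered correctly
        total_attention_checks: int
        response_time_s: float         # total time spent on the questionnaire
        min_plausible_time_s: float    # below this, answers look careless


    def reliability_weight(s: Submission) -> float:
        """Hypothetical reliability weight in [0, 1]: attention-check accuracy,
        down-weighted when the response time is implausibly short."""
        accuracy = s.passed_attention_checks / max(s.total_attention_checks, 1)
        speed_penalty = min(s.response_time_s / s.min_plausible_time_s, 1.0)
        return accuracy * speed_penalty


    def distribute_future_reward(submissions: list[Submission],
                                 reward_pool: float) -> dict[str, float]:
        """Split a future reward pool in proportion to each participant's weight,
        so that careful responders earn more than careless ones."""
        weights = {s.participant_id: reliability_weight(s) for s in submissions}
        total = sum(weights.values())
        if total == 0:
            return {pid: 0.0 for pid in weights}
        return {pid: reward_pool * w / total for pid, w in weights.items()}


    # Example: two careful responders and one careless one sharing a pool of 30 units.
    subs = [
        Submission("p1", 5, 5, 620.0, 300.0),
        Submission("p2", 4, 5, 540.0, 300.0),
        Submission("p3", 1, 5, 90.0, 300.0),   # failed checks, implausibly fast
    ]
    print(distribute_future_reward(subs, reward_pool=30.0))

In this toy setup the careless responder receives only a small share of the pool, which is the incentive property the weight and reward-distribution stages are meant to provide.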
