Efficacy and cost-effectiveness of an automated screening algorithm in an inpatient clinical trial

Introduction: Screening and recruitment for clinical trials can be costly and time-consuming. Inpatient trials present additional challenges because enrollment is time sensitive, constrained by the patient's length of stay. We hypothesized that using an automated prescreening algorithm to identify eligible subjects would increase screening efficiency and enrollment and would be cost-effective compared with manual review of a daily admission list.

Methods: Using a before-and-after design, we compared the time spent screening, the number of patients screened, the enrollment rate, and the cost-effectiveness of each screening method in an inpatient diabetes trial conducted at Massachusetts General Hospital. Manual chart review (CR) involved reviewing a daily list of admitted patients to identify eligible subjects. The automated prescreening (APS) method used an algorithm to generate a daily census of patients with a glucose level ≥ 180 mg/dL, an insulin order, and/or an admission diagnosis of diabetes mellitus. This census was then manually screened to confirm eligibility and to exclude patients who met our exclusion criteria. We determined screening and enrollment rates and the cost-effectiveness of each method as a function of study sample size.

Results: Total screening time (prescreening and screening) decreased from 4 to 2 h, allowing subjects to be approached earlier in the course of the hospital stay. The average number of patients prescreened per day increased from 13 ± 4 to 30 ± 16 (P < 0.0001). The enrollment rate increased from 0.17 to 0.32 patients per screening day. Developing the computer algorithm added a fixed cost of US$3000 to the study. Based on our screening and enrollment rates, the algorithm became cost-neutral after 12 patients were enrolled. Larger sample sizes further favored screening with the algorithm; by contrast, higher recruitment rates favored individual CR.

Limitations: Because of the before-and-after design of this study, it is possible that unmeasured factors contributed to the increased enrollment.

Conclusion: Using a computer algorithm to identify eligible patients for an inpatient clinical trial increased the number of patients screened and enrolled, decreased the time required to enroll them, and was less expensive. An upfront investment in a computerized screening algorithm may be cost-effective even for relatively small trials, especially when the recruitment rate is expected to be low.
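The prescreening logic described in Methods amounts to a simple daily filter over admission data: keep any patient with a glucose ≥ 180 mg/dL, an insulin order, or an admission diagnosis of diabetes mellitus. The sketch below illustrates that filter in Python; the record fields and data structures are hypothetical stand-ins, since the abstract does not describe the hospital information system's actual interface.

```python
# Minimal sketch of the automated prescreening (APS) filter described in Methods.
# Field names and input format are assumptions; the real algorithm ran against the
# hospital information system, whose schema is not described in the abstract.

from dataclasses import dataclass, field
from typing import List, Optional

GLUCOSE_THRESHOLD_MG_DL = 180  # inclusion threshold reported in the abstract

@dataclass
class AdmittedPatient:
    mrn: str
    max_glucose_mg_dl: Optional[float] = None
    has_insulin_order: bool = False
    admission_diagnoses: List[str] = field(default_factory=list)

def meets_prescreen_criteria(p: AdmittedPatient) -> bool:
    """True if the patient matches any APS criterion: glucose >= 180 mg/dL,
    an insulin order, or an admission diagnosis of diabetes mellitus."""
    high_glucose = (p.max_glucose_mg_dl is not None
                    and p.max_glucose_mg_dl >= GLUCOSE_THRESHOLD_MG_DL)
    diabetes_dx = any("diabetes mellitus" in dx.lower() for dx in p.admission_diagnoses)
    return high_glucose or p.has_insulin_order or diabetes_dx

def daily_prescreen_census(admissions: List[AdmittedPatient]) -> List[AdmittedPatient]:
    """Generate the daily APS census; study staff then manually confirm
    eligibility and apply the trial's exclusion criteria."""
    return [p for p in admissions if meets_prescreen_criteria(p)]
```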

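The break-even result in Results (cost-neutral after 12 enrollments) follows from comparing the staff time spent per enrolled patient under each method. The calculation below reproduces that arithmetic; the hourly screening cost is not reported in the abstract, so the value used here is purely illustrative, chosen only to show how the reported figures (4 vs 2 h of daily screening, 0.17 vs 0.32 enrollments per screening day, US$3000 fixed algorithm cost) combine into a break-even sample size.

```python
# Back-of-the-envelope break-even calculation for the APS algorithm's fixed cost.
# Study figures come from the abstract; HOURLY_SCREENING_COST is an assumption
# (not reported) included only to make the arithmetic concrete.

FIXED_ALGORITHM_COST = 3000.0      # US$, reported one-time development cost
HOURLY_SCREENING_COST = 14.5       # US$/h, assumed staff cost (illustrative only)

CR_HOURS_PER_DAY, CR_ENROLL_PER_DAY = 4.0, 0.17    # manual chart review
APS_HOURS_PER_DAY, APS_ENROLL_PER_DAY = 2.0, 0.32  # automated prescreening

# Screening hours spent per enrolled patient under each method
cr_hours_per_enrollee = CR_HOURS_PER_DAY / CR_ENROLL_PER_DAY     # ~23.5 h
aps_hours_per_enrollee = APS_HOURS_PER_DAY / APS_ENROLL_PER_DAY  # ~6.3 h

# Dollars saved per enrolled patient by switching to APS
savings_per_enrollee = HOURLY_SCREENING_COST * (cr_hours_per_enrollee - aps_hours_per_enrollee)

# Enrollments needed before the savings offset the fixed development cost
break_even_n = FIXED_ALGORITHM_COST / savings_per_enrollee
print(f"Break-even at ~{break_even_n:.0f} enrolled patients")  # ~12 with these inputs
```

With these inputs the break-even comes out at roughly 12 patients, matching the figure reported in Results. A lower hourly cost or a higher baseline recruitment rate pushes the break-even point higher, consistent with the abstract's observation that higher recruitment rates favor chart review.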