Context and Approach in Reporting Evaluations of Electronic Health Record–Based Implementation Projects

Electronic health records (EHRs) are ubiquitous yet still evolving, resulting in a moving target for determining the effects of context (features of the work environment, such as organization, payment systems, user training, and roles) on EHR implementation projects. EHRs have become instrumental in effecting quality improvement innovations and in providing data to evaluate them. However, study reports typically fail to describe contextual details sufficiently for readers to apply the findings. As for any evaluation, the quality of reporting is essential to learning from and disseminating the results. Extensive guidelines exist for reporting virtually all types of applied health research, but they are not tailored to capture some contextual factors that may affect the outcomes of EHR implementations, such as attitudes toward implementation, the format and amount of training, post go-live support, the extent of local customization, and time diverted from direct interaction with patients to computers. Nevertheless, evaluators of EHR-based innovations can choose reporting guidelines that match the general purpose of their evaluation and the stage of their investigation (planning, protocol, execution, and analysis), and they should report relevant contextual details (including, if pertinent, any pressures to justify the huge investments and many years that some implementations require). Reporting guidelines are based on the scientific principles and practices that underlie sound research and should be consulted from the earliest stages of planning an evaluation onward, serving as guides for how evaluations should be conducted as well as reported.
