A preventable medical error caused my grandmother's death. Her physician's office was a sea of paper, without automated workflow tools, decision support, or electronic documentation. Would electronic health records (EHRs) and e-prescribing have warned her physician that high-dose nonsteroidal anti-inflammatory drugs might induce a gastrointestinal bleeding episode in an octogenarian receiving prednisone? Very likely. Do we have proof from randomized clinical trials done in rural clinician offices? We do not.

In this issue, Chaudhry and colleagues (1) note that 25% of the health information technology (HIT) efficacy literature comes from 4 institutions, each of which has developed its own EHR system. These 4 sites have convincingly documented the positive effects of HIT on quality, efficiency, and costs. Specifically, HIT has 1) increased delivery of care in adherence with guidelines and protocols, 2) enhanced the capacity to perform surveillance and monitoring for disease conditions, 3) reduced rates of medication errors, 4) decreased utilization of care, and 5) had mixed effects on physicians' time utilization.

In view of the limited evidence, Chaudhry and colleagues conclude that many stakeholders who are seeking to make information technology investments will struggle to replicate the experience of these 4 sites in typical health care facilities using commercial software packages. I accept the authors' conclusion that we have incomplete evidence of the effect of many forms of clinical automation in many settings. However, I disagree with the implication that the use of commercial products is inherently different from that of the self-developed applications described in the literature. In my experience, the culture of an institution and the quality of the software implementation affect the successful adoption of clinical automation more than whether the product was self-built or purchased. Furthermore, many commercial products have been acquired from, or were inspired by, the self-developed systems at the sites described in the literature.

Given an evidence base that describes the experience of only 4 institutions, what explains the national sense of urgency to implement EHRs, pay-for-performance incentives for e-prescribing, and quality imperatives? I believe that the 4 institutions referenced in the article have demonstrated that the right combination of technology and institutional culture can lead to important gains in quality and value. The United States needs these gains so desperately that it is willing to bet on EHRs despite the limited scope of the evidence. Several applications seem so likely to improve the quality and effectiveness of care that we should use them now.

Electronic Health Records and E-Prescribing Systems

Any physician or nurse can see the potential for mistakes in the process of manual ordering and prescribing. Handwritten orders and prescriptions are hard to read. Given the complexity of interaction rules and the continuous introduction of new medications, drug-drug interactions and drug-allergy interactions are hard to avoid without the information built into the decision-support systems included in most EHRs. Although automation may cause new errors (2), it is difficult to argue against a workflow that automatically reconciles a patient's medication history with newly prescribed medications, identifies potential interactions, and facilitates the accurate delivery of inpatient orders and outpatient prescriptions to patients.
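To make that decision-support step concrete, the following is a minimal sketch of the kind of interaction check an e-prescribing system runs at the moment of ordering. It is not drawn from any actual EHR product; the drug pairs, allergy map, and function name are invented for illustration.

```python
# Illustrative only: a toy interaction check of the kind an e-prescribing
# decision-support module might run. The rules below are hypothetical
# examples, not a clinical knowledge base.

# Hypothetical interaction knowledge: unordered drug pairs flagged as risky.
DRUG_DRUG_RULES = {
    frozenset({"ibuprofen", "prednisone"}): "increased risk of GI bleeding",
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

# Hypothetical allergy map: documented allergy -> drugs to avoid.
ALLERGY_RULES = {
    "penicillin": {"amoxicillin", "ampicillin"},
}


def check_new_prescription(new_drug, current_meds, allergies):
    """Return warnings for a newly prescribed drug, given the reconciled
    medication list and documented allergies."""
    warnings = []
    for med in current_meds:
        rule = DRUG_DRUG_RULES.get(frozenset({new_drug, med}))
        if rule:
            warnings.append(f"{new_drug} + {med}: {rule}")
    for allergy in allergies:
        if new_drug in ALLERGY_RULES.get(allergy, set()):
            warnings.append(f"{new_drug}: documented {allergy} allergy")
    return warnings


# The scenario from the opening paragraph, in miniature.
print(check_new_prescription(
    new_drug="ibuprofen",
    current_meds=["prednisone", "lisinopril"],
    allergies=["penicillin"],
))
# ['ibuprofen + prednisone: increased risk of GI bleeding']
```

A paper chart offers no such check; an electronic workflow can perform it on every order.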
Clinical Data Sharing

In general, clinical computer systems at different institutions do not effectively communicate with each other, resulting in redundant orders for the same patient by different clinicians. In Massachusetts, approximately 15% of diagnostic tests are repeated, at an estimated cost of $4.5 billion per year (3). If exchange standards for health care data are harmonized and an architecture for clinical data exchange is widely used, clinicians and patients will have access to longitudinal health records across all care settings: inpatient, outpatient, and emergency department. Such exchange of electronic health care data will enable patients to become stewards of their own data, laying the foundation for them to maintain their own personal health records. Coordination among caregivers and collaboration with patients seem very likely to improve the quality and efficiency of care. Ample experience from the Veterans Administration with quality improvement resulting from clinical data sharing across sites of care should be enough motivation for us to take the leap of faith that clinical data sharing is a worthwhile effort (4).

Biosurveillance and Quality Measurement

Paper-based charts may or may not suffice for retrieving information about an individual patient, but they are of very little value when asking population-level questions (such as "How many patients with myocardial infarction are receiving beta-blockers and a daily aspirin?" or "How many patients have recently had a chief symptom of cough and fever after visiting Southeast Asia?"). Electronic repositories of data empower quality measurement, biosurveillance, and clinical research; a minimal sketch of such a population-level query appears below. They also enable us to notify patients when product recalls occur, such as the recent action by the U.S. Food and Drug Administration regarding Vioxx (Merck & Co., Whitehouse Station, New Jersey).

As these examples indicate, decision makers do not, and should not, need controlled clinical trials for every application in every setting. Other countries, such as the United Kingdom, Sweden, and Canada, have widely implemented HIT despite the limitations of the evidence. Other industries, such as airlines, have made investments in safety without proof of effectiveness. Face validity is sometimes enough. For example, few people would want to participate in a placebo-controlled, randomized trial of the efficacy of parachutes against gravity (5). I believe that proof of effectiveness in some settings is enough evidence to proceed with widespread implementation of the EHR.

If we proceed, what additional informatics research should be done? We should direct future funding to 2 areas: 1) evaluation of new and emerging HIT innovations to guide our implementation priorities and 2) evaluation of how to extend adoption of these technologies across the digital divide to settings that are typically under-resourced to implement HIT, such as small clinician offices, community health centers, and other safety-net providers. Published accounts of the experience of leading centers are valuable, but we must learn more about what happens when we redesign processes of care as technology is implemented in other settings (6). No institution should have to rediscover an avoidable problem. The Agency for Healthcare Research and Quality has recently invested substantially in research on these topics, although much more needs to be done.
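Returning to the population-level question raised above, here is a minimal sketch of the kind of registry query that an electronic data repository makes trivial and a paper chart room makes impractical. It assumes a simplified in-memory repository; the patient records, field names, and drug list are invented for illustration and do not come from any real registry or EHR database.

```python
# Illustrative only: a toy quality-measurement query over an electronic
# repository. Real measures run against clinical databases, but the
# numerator/denominator pattern is the same.

patients = [
    {"id": 1, "diagnoses": {"myocardial infarction"},
     "medications": {"metoprolol", "aspirin"}},
    {"id": 2, "diagnoses": {"myocardial infarction"},
     "medications": {"aspirin"}},
    {"id": 3, "diagnoses": {"diabetes mellitus"},
     "medications": {"metformin"}},
]

BETA_BLOCKERS = {"metoprolol", "atenolol", "carvedilol"}


def mi_patients_on_guideline_therapy(records):
    """Count MI patients receiving both a beta-blocker and daily aspirin."""
    denominator = [p for p in records
                   if "myocardial infarction" in p["diagnoses"]]
    numerator = [p for p in denominator
                 if p["medications"] & BETA_BLOCKERS
                 and "aspirin" in p["medications"]]
    return len(numerator), len(denominator)


hits, total = mi_patients_on_guideline_therapy(patients)
print(f"{hits} of {total} MI patients on beta-blocker + aspirin")  # 1 of 2
```

With paper charts, answering the same question would require a manual review of every record.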
In my opinion, HIT implementation in the United States is comparable to the transition from the horse and buggy to the automobile. The experience in the first decade of the 20th century looked positive. Building automobile-manufacturing plants involved substantial risks, but they seemed acceptable to investors. The potential benefits of HIT are huge, although we need better roads. While some accidents and some lemons are inevitable, HIT may transform health care.
References

[1] Perlin JB, et al. The Veterans Health Administration: quality, value, accountability, and information as transforming strategies for patient-centered care. HealthcarePapers. 2005.
[2] Shekelle P, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine. 2006.
[3] Localio A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005.
[4] Baron R, et al. Electronic health records: just around the corner? Or over the cliff? Annals of Internal Medicine. 2005.