Once we agree, or concede, that intense, unscripted competition is now reforming the U.S. health care system, the pressing need for information becomes self-evident. Few sentient beings would argue the first point. In some areas, competition has rocked and upended health care landscapes that formerly shifted only with the grinding reluctance of tectonic plates. The second point follows: Information is necessary for making informed choices among competitors. To date, however, costs have been easier to quantify than quality, so dollars have dominated decision making. When driven purely by price, competition is antithetical to the fundamental mission of health care. Information, therefore, is also essential to ensure that quality enters the competitive equation. The rhetoric of leading health care competitors asserts their interest in value, an amalgam of cost and quality. Nonetheless, explicit quantification of value remains elusive.

Experts legitimately claim that rigorous measurement of health care quality is now possible [1, 2]. However, obtaining good information about anything, including putative health care quality, requires money and time. Designing meaningful quality measures and measurement tools, obtaining necessary data from medical records or patients, ensuring minimal reliability and accuracy of data, doing credible statistical analyses, and other necessary tasks are expensive and time-consuming. How much are we willing to pay to learn about health care quality?

Thus far, most U.S. initiatives to examine hospital quality have begun on a small scale, using readily available, computer-readable hospital discharge abstracts to examine mortality rates [3, 4]. Discharge abstracts, typically produced through hospital billing, include such items as patient demographic characteristics, payer, dates, discharge disposition, and diagnosis and procedure codes.
Computer files that contain discharge abstracts and encompass all hospitalizations statewide can be purchased from some state health data organizations for approximately $1000. This broad coverage, easy access, and low cost contrast favorably with the high costs and logistical challenges of obtaining clinical information from medical records. In 1990, for example, when California considered imposing new requirements for reporting clinical data for evaluating hospital quality, the estimated annual cost of more than $61 million sent state lawmakers into sticker shock [5]. Instead, California adopted a method based on its already available database of discharge abstracts.

Despite their pecuniary attractions, discharge abstracts offer only limited insight into patients' clinical conditions. Meaningful comparisons of mortality rates across hospitals generally must control for patient risk, but risk adjustment using discharge diagnosis codes and other administrative data is problematic. In 1993, the Health Care Financing Administration abandoned its comparisons of hospital mortality rates for Medicare beneficiaries largely because of concerns about the inability of administrative data to adjust adequately for patient risk [6].

Therefore, the model described by Pine and colleagues in this issue [7] offers a welcome alternative. They found that linking existing electronic data from laboratory reporting systems to administrative files could provide a cost-effective way to add valuable clinical information. As have others [8, 9], Pine and colleagues showed that even a few clinical variables can contribute substantially to predicting in-hospital deaths of general, acute care patients.

Using electronic laboratory data to predict patient outcomes is not a new idea [10-13]. At a university hospital, McMahon and colleagues [10] created APACHE-L (a version of the Acute Physiology and Chronic Health Evaluation severity measure) by using data extracted from the laboratory computer.
Using values of 20 laboratory tests, they showed that APACHE-L added substantially to predictions of length of stay and use of ancillary resources for selected conditions. Similarly, Mozes and colleagues [11] used values of 22 laboratory tests obtained from electronic laboratory reporting systems at two large teaching hospitals to construct predictors of length of stay. Davis and colleagues [13] predicted in-hospital deaths by using electronic data taken not only from the laboratory system but also from nurses' assessments of function, which are entered into the computer during each shift at a major teaching hospital. They found that the nurses' assessments (for example, whether the patient required assistance with bathing) were more powerful predictors of death than were most laboratory variables.

Despite these encouraging experiences, it is not accidental that they all occurred at large facilities. An obvious practical question involves the feasibility of extracting comparable electronic laboratory data across facilities, large and small. Although most hospitals are moving toward computerizing their information systems, such as laboratory reporting, this process is expensive and by no means universal. In many instances, only islands of automation exist; digital health information remains cloistered within departments or small organizational units [14]. Although standards for transmission of electronic data are now being promulgated by such groups as the American Society for Testing and Materials and the American National Standards Institute, getting computers from different departments to talk to each other remains a frustratingly intractable problem, even in institutions with extensive information systems.
For example, Davis and colleagues found that laboratory data and nurses' assessments were archived on separate computer systems that could not communicate; the investigators downloaded the desired data from each computer and then merged them elsewhere to create the analytical database. Although Pine and colleagues tout the utility of electronic laboratory information, it is ironic, but telling, that they used data abstracted manually from medical records.

Other hurdles could also impede widespread use of detailed digital data about patients. Protecting confidentiality and privacy raises thorny ethical and technical questions. For example, patients about to have surgery often undergo outpatient testing before admission; laboratory values must therefore be obtained from diverse sites to produce complete patient profiles.

Nevertheless, the United States is moving toward a time when data on health and health care will be widely available electronically. Several provisions of the Health Insurance Portability and Accountability Act of 1996 (the Kassebaum-Kennedy bill), signed by President Clinton on 21 August 1996, give the federal government a leading role in determining standards for data transmission, mechanisms for protecting privacy, and ways of coding specified data elements. The law requires that the National Committee on Vital and Health Statistics study the issues related to the adoption of uniform data standards for patient medical record information and the electronic exchange of such information and report back within 4 years with recommendations and legislative proposals for such standards and electronic exchange.

In the meantime, we cannot wait for good information about quality. We must demand it and be willing to pay, but how much? Projecting these costs is difficult. Since 1986, Pennsylvania has required hospitals to collect extensive clinical data that will be used to adjust mortality rates for risk [15, 16].
One study estimated that these data cost $17.43 per case, with total annual costs ranging from $70,000 at small rural facilities to $134,000 at large urban hospitals [17]. However, these considerable sums represent only 0.36% of total expenditures at small hospitals and 0.27% at large ones [17]. Pennsylvania employers claim to have saved millions by using these data to negotiate with hospitals. Nonetheless, a lingering question is whether knowing hospital mortality rates alone is worth this price. The evidence about the relation between mortality rates and hospital quality is mixed. Even if perfect, hospital mortality rates represent only the tip of the iceberg of our need for information about health care quality [2, 18]. In addition, information on outcomes of hospitalizations tells us only about persons in the health care system, not those outside it.

The obvious risk is that we will be swimming in costly information but will be unable to determine what it all means. T.S. Eliot anticipated this problem in the first chorus of The Rock: "Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?" If this happens, dollars will continue to drive competition in our health care system.
[1] L. Iezzoni, et al. The role of severity information in health policy debates: a survey of state and regional concerns. Inquiry: A Journal of Medical Care Organization, Provision and Financing, 1991.
[2] A. Localio, et al. Comparing hospital mortality in adult patients with pneumonia: a case study of statistical methods in a managed care program. Annals of Internal Medicine, 1995.
[3] L. McMahon, et al. APACHE-L: a new severity of illness adjuster for inpatient medical care. Medical Care, 1992.
[4] E. McGlynn, et al. Health system reform and quality. JAMA, 1996.
[5] L. Iezzoni, et al. Widespread assessment of risk-adjusted outcomes: lessons from local initiatives. The Joint Commission Journal on Quality Improvement, 1994.
[6] A. M. Epstein, et al. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. The New England Journal of Medicine, 1996.
[7] C. Safran, et al. Predicting in-hospital mortality: the importance of functional status information. Medical Care, 1995.
[8] G. Coffman, et al. Predicting in-hospital mortality: a comparison of severity measurement approaches. Medical Care, 1992.
[9] D. Draper, et al. Predicting hospital-associated mortality for Medicare patients: a method for patients with stroke, pneumonia, acute myocardial infarction, and congestive heart failure. JAMA, 1988.
[10] L. Sheiner, et al. Case-mix adjustment using objective measures of severity: the case for laboratory data. Health Services Research, 1994.
[11] L. Sheiner, et al. Improving the homogeneity of diagnosis-related groups (DRGs) by using clinical laboratory, demographic, and discharge data. American Journal of Public Health, 1989.
[12] H. Luft, et al. The California Hospital Outcomes Project: using administrative data to compare hospital performance. The Joint Commission Journal on Quality Improvement, 1995.
[13] J. Hibbard, et al. What type of quality information do consumers want in a health care report card? Medical Care Research and Review, 1996.
[14] B. L. Miller, et al. Office of Technology Assessment Task Force, 1991.
[15] M. Pine, et al. Predictions of hospital mortality rates: a comparison of data sources. Annals of Internal Medicine, 1997.
[16] R. H. Brook, et al. Quality of health care. Part 2: measuring quality of care. The New England Journal of Medicine, 1997.