One of the main problems in quality improvement of medical care is the identification of care that is substandard. Various methods have been proposed or are now being tested, including the assessment of outcome measures such as mortality rates [1, 2], unexpected changes in patient health status [3], and patient reports of satisfaction [4]. The primary method of uncovering poor quality care, however, is the use of clinical screening criteria to develop a subset of cases that are then subjected to structured, implicit review by physicians. This method was used by several of the leading comprehensive research programs on quality that were published in the last 5 years [5, 6]. A variant of this approach is used by the federal government's Peer Review Organizations for the Medicare program [7]. Screening of medical records followed by structured review is likely to remain part of any comprehensive quality program.

Little information exists on the accuracy of medical records as the basis for quality-improvement efforts. Although we have previously shown that most examples of malpractice can be found in medical records [8], nagging doubts remain that evidence of poor quality may not be documented in the record by providers. Retrospective record review also tends to make providers feel as though they are being investigated and may reduce physician interest in quality assurance. Finally, review of records may be expensive [9].

A different method of gathering data for quality-improvement purposes is to ask physicians and other providers to identify, concurrently, patients who received substandard care. A physician reporting system obviates the weakness of relying on the medical record. It can also be more easily integrated into a philosophy of quality improvement, which many have argued is essential for real change [10, 11]. Time-of-event identification of problems offers greater potential for mitigating them.
In addition, the identification of quality problems as they occur may be less expensive than retrospective record review. Of course, given physicians' negative attitudes toward quality-improvement efforts, many might doubt the feasibility of instituting an effective physician reporting system.

Because of the theoretical advantages of physician identification of quality problems, we compared physician reporting of adverse events by the housestaff on the medical service of a teaching hospital with a retrospective two-step process: screening of the medical record for adverse events followed by implicit physician judgment about identified events. We determined the relative efficacy of the physician reporting and chart-review screening processes and determined whether the two processes identified the same adverse events. We also compared both of these approaches with the traditional quality-assurance devices used at this hospital, which are similar to those of other hospitals. We report the results of our comparisons and comment on their implications for new trends in quality improvement.

Methods

Sample

We studied all 3146 admissions to the medical service of the Brigham and Women's Hospital for the period 13 November 1990 to 14 March 1991. We were unable to obtain medical records for five admissions; the following description is therefore based on the review of 3141 admissions.

Review Techniques

As in previous studies, we used the adverse event as the outcome measure. An adverse event was defined as an injury that prolongs the hospital stay or leads to disability at the time of discharge and that is caused by medical management rather than by the underlying disease process. We also identified the subset of adverse events that were preventable. Two parallel strategies were used to identify adverse events: a medical-record review strategy (strategy 1) and a physician reporting strategy (strategy 2).
In addition, we compared the combined strategies with the quality-of-care data routinely collected by the hospital.

Strategy 1: Medical-Record Review

The first strategy entailed a review of the medical record. All records were initially reviewed by medical-record analysts using a list of screening criteria similar to that used in the Harvard Medical Practice Study [12]. The 10 medical-record analysts, all fourth-year students enrolled in the Health Information Administration Program at Northeastern University, were familiar with medical-record coding, terminology, and management. All analysts read a training manual that summarized the chart-review portion of the study and that contained examples of how each of the 15 screening criteria should be evaluated in the context of the study protocol. These screens were designed to signal events in patient care that might be associated with an undesirable outcome resulting from medical management. We have previously discussed the reliability and validity of these screening criteria [13]. In addition to applying the 15 screening criteria, medical-record analysts were instructed to collect demographic characteristics such as date of birth, sex, and race. All charts from patients for whom at least 1 of the 15 study criteria was positive were referred for physician review.

Eighteen senior medical residents from another Boston teaching hospital conducted an independent review of the medical records that screened positive. Using the Preventable Medical Injury Form as a guide, the physicians reviewed each admission for potential adverse events. The Preventable Medical Injury Form focuses the implicit judgments of reviewing physicians on the critical issue of whether an adverse event was present (medical causation). Each reviewer was trained in the use of this form during a 2-hour session and was given a training manual that contained directions and detailed examples of confirmed case-patients.
In the analysis of a case-patient with positive screening criteria, the physician-reviewers first made a determination about causation by medical management. Next, they classified the event into one of three categories: 1) procedure-related injury; 2) medication-related injury; or 3) other therapy-related injury. After categorizing the injury, they recorded where the event occurred and the specialty responsible for the injury. The extent and nature of the patient's disability were then assessed. If more than one medical injury occurred during a given admission, physician-reviewers were instructed to summarize only the most disabling injury or the one causing the longest additional length of stay; only one adverse event per patient was counted.

To evaluate the inter-rater reliability of the physician-reviewers' critical judgments of causation, we randomly selected a 10% sample of the medical records that screened positive. Physician-reviewers, blinded to the results of the first screening, were asked to re-review the sample.

Strategy 2: Physician Reporting

The second identification strategy involved confidential reporting of potential injuries by the medical housestaff. Thirteen medical teams were responsible for reporting events at any one time. The hospital has a heavily used electronic mail system, available on 2000 terminals in all areas of the hospital. The senior or junior residents heading each of these teams were reminded daily, by electronic mail, to report events. Case-patient summaries and relevant study elements could be reported to the study team by electronic mail or could be written and left in a study box that was checked daily. Most participating residents elected to report cases by electronic mail.
The reporting focused on any patient under the team's care; if a patient was being cross-covered by a physician from another team, the original team was alerted to the adverse event by the cross-covering physician. Thus, the reporting was physician self-reporting, not reporting on the adverse events of others. During weekly resident meetings, the entire medical housestaff were reminded of the importance of reporting potential injuries. Medical interns were not asked to report potential injuries.

Each case-patient for whom an adverse event was reported by the housestaff was then matched to two controls on the basis of bed occupancy at the time of the event: the patients occupying the beds adjacent to the case-patient at the time of the event were selected as controls. The case-control portion of this investigation is described separately [Petersen LA and colleagues. Unpublished data].

After daily reports of potential injuries, resident physicians reviewed the charts of case-patients to determine whether an adverse event had occurred. They were trained in the definition of an adverse event in a manner similar to the physicians who participated in the medical-record review strategy. The type of event was again classified into three categories: 1) procedure-related injury; 2) medication-related injury; or 3) other therapy-related injury. The location of the event was also noted. Cases were generally reported within 24 hours of occurrence, and pertinent data elements were completed within 48 hours.

Preventability of Adverse Events

For all adverse events identified by either strategy, judgments of preventability were later made by two senior physician investigators, blinded to all data elements except detailed clinical case summaries. The reviewers, working independently, made an initial judgment of preventability on a six-point scale, ranging from 1 (little or no evidence of preventability) to 6 (virtually certain evidence of preventability).
For cases in which the reviewers differed by more than two points, they met and reached a consensus. For the analysis, we collapsed the scale into a dichotomous outcome (preventable or not preventable); case-patients with scores of 4 or more were considered preventable.

Costs of Review Process

Cost estimates were calculated by multiplying the numb
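The preventability classification above amounts to a small decision procedure. A minimal sketch in Python illustrates it (the consensus rule and the score threshold of 4 are taken from the text; the function names are hypothetical and not part of the study protocol):

```python
# Illustrative sketch of the preventability judgments described above.
# The two-point disagreement rule and the >= 4 threshold come from the text;
# function names are hypothetical.

def needs_consensus(score_a: int, score_b: int) -> bool:
    """Reviewers met to reach consensus when their independent scores
    differed by more than two points."""
    return abs(score_a - score_b) > 2

def is_preventable(final_score: int) -> bool:
    """Collapse the six-point scale (1 = little or no evidence of
    preventability, 6 = virtually certain evidence) into a dichotomous
    outcome; scores of 4 or more count as preventable."""
    if not 1 <= final_score <= 6:
        raise ValueError("score must be on the 1-6 scale")
    return final_score >= 4
```

For example, independent scores of 2 and 5 would trigger a consensus meeting, whereas scores of 3 and 5 would not; a final score of 4 would classify the event as preventable.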
References

[1] Brook RH, et al. Watching the doctor-watchers. How well do peer review organization methods detect hospital care quality problems? JAMA. 1992.
[2] Blumenthal D, et al. The case for using industrial quality management science in health care organizations. JAMA. 1989.
[3] Brennan TA, et al. Reliability and validity of judgments concerning adverse events suffered by hospitalized patients. Medical Care. 1989.
[4] Wintfeld N, et al. Analyzing hospital mortality. The consequences of diversity in patient mix. JAMA. 1991.
[5] Kravitz R, et al. Differences in the mix of patients among medical specialties and systems of care. Results from the Medical Outcomes Study. JAMA. 1992.
[6] Brennan T, et al. Incidence of adverse events and negligence in hospitalized patients. 2008.
[7] Brennan TA, et al. Identification of adverse events occurring during hospitalization. A cross-sectional study of litigation, quality assurance, and medical records at two teaching hospitals. Annals of Internal Medicine. 1990.
[8] Brennan T, et al. Incidence of adverse events and negligence in hospitalized patients. The New England Journal of Medicine. 1991.
[9] Hanzal J, et al. A measure of malpractice. 1994.
[10] Gatsonis C, et al. Rates of avoidable hospitalization by insurance status in Massachusetts and Maryland. JAMA. 1992.
[11] Lipsitz S, et al. Socioeconomic status and risk for substandard medical care. JAMA. 1992.
[12] Andreasen N, et al. Reliability studies of psychiatric diagnosis. Theory and practice. Archives of General Psychiatry. 1981.
[13] Yule G. On the methods of measuring association between two attributes. 1912.
[14] Draper D, et al. Changes in quality of care for five diseases measured by implicit review, 1981 to 1986. JAMA. 1990.
[15] Rubin H, et al. Patient judgments of hospital quality. Response to questionnaire. Medical Care. 1990.
[16] Brook RH, et al. Adjusted hospital death rates: a potential screen for quality of medical care. American Journal of Public Health. 1987.
[17] Berwick DM, et al. Continuous improvement as an ideal in health care. The New England Journal of Medicine. 1989.