The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach.

OBJECTIVE: To develop and apply a systematic approach to identifying and defining valid, relevant, and feasible measures of emergency department (ED) clinical performance.

METHODS: An extensive literature review was conducted to identify clinical conditions frequently treated in most EDs and clinically relevant outcomes for evaluating those conditions. From this review, a set of condition-outcome pairs was defined. An expert panel was convened, and a modified-Delphi process was used to identify the condition-outcome pairs for which the panel believed quality of care for the condition was linked to the specified outcome. For the highly rated pairs, specific measurable indicators were then identified in the literature, and the panelists rated each indicator on its relevance to ED performance and its need for risk adjustment. The feasibility of calculating these indicators was assessed by applying them to a routinely collected data set.

RESULTS: Thirteen clinical conditions and eight quality-of-care outcomes (mortality, morbidity, admissions, recurrent visits, follow-up with primary care, length of stay, diagnostics, and resource use) were identified from the literature, yielding 104 condition-outcome pairs. The panel selected 21 pairs, representing eight of the 13 clinical conditions, and then selected 29 specific clinical indicators, representing those pairs, to measure ED performance. Eight of these indicators, covering five clinical conditions, could be calculated from a routinely collected data set.

CONCLUSIONS: Using a modified-Delphi process, it was possible to identify a series of condition-outcome pairs that panelists considered potentially related to ED quality of care, and then to define specific indicators for many of those pairs. Some indicators could be measured using an existing data set. Developing sound clinical performance indicators for the ED is possible, but the feasibility of measuring them depends on the availability and accessibility of high-quality data.
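The panel's rating-and-selection step can be sketched in code. The following is a minimal illustration only, assuming a RAND/UCLA-style convention in which each panelist rates a condition-outcome pair on a 1-9 scale and a pair is retained when the panel median is high and no rating falls in the lowest tertile; the thresholds, pair names, and ratings are hypothetical and are not taken from the study.

```python
from statistics import median

# Hypothetical panel ratings (1-9 appropriateness scale) per condition-outcome pair.
ratings = {
    ("asthma", "admissions"): [8, 7, 9, 7, 8, 6, 8],
    ("ankle injury", "diagnostics"): [5, 4, 6, 3, 7, 5, 4],
}

def is_selected(scores, threshold=7):
    """Retain a pair if the panel median meets the threshold and no
    panelist rated it in the lowest tertile (a disagreement signal)."""
    return median(scores) >= threshold and min(scores) > 3

selected = [pair for pair, scores in ratings.items() if is_selected(scores)]
# With the ratings above, only ("asthma", "admissions") is retained.
```

In a two-round modified-Delphi process, pairs near the threshold would typically be fed back to panelists with the group distribution before a second rating round, rather than being decided in a single pass.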
