Recorded Criteria as a “Gold Standard” for Sensitivity and Specificity Estimates of Surveillance of Nosocomial Infection: A Novel Method to Measure Job Performance

Abstract

Objectives: To compare the accuracy of infection control practitioners' (ICPs') classifications of operative site infection in Florida Consortium for Infection Control (FCIC) hospitals during two periods, 1990 to 1991 and 1991 to 1992, and to estimate the effect of duration of surveillance experience on that accuracy.

Methods: Medical record reviewers examined the records of all patients classified by an ICP as infected, distinguishing false-positives from true infections on the basis of standard infection criteria and the ICP's contemporaneous clinical observations. Reviewers also examined a random sample of 100 records from patients classified as noninfected for evidence of undetected infections (false-negatives). These observations permitted estimates of the sensitivity and specificity of each ICP's classification of infection status.

Setting: Fourteen FCIC community hospitals at which the performance of 16 ICPs was monitored.

Results: There was a strong linear trend relating increasing sensitivity to years of ICP surveillance experience (P<.001). For ICPs with <4 years of experience, satisfactory sensitivity (≥80%) was reached in only one of 10 ICP-years of observation. For ICPs with ≥4 years' experience, satisfactory sensitivity was achieved in 14 of 18 person-years (P=.001). Estimated specificity was 97% to 100% for all ICP-years observed.

Conclusions: ICPs with <4 years of surveillance experience in FCIC community hospitals rarely achieved a satisfactory sensitivity estimate, whereas ICPs with ≥4 years' experience generally did. Monitoring ICP surveillance accuracy through retrospective medical record audits offers an objective approach to evaluating ICP performance and to interpreting infection rates at different hospitals.
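The validation design above reduces to a standard 2×2 calculation: confirmed infections among ICP-flagged cases yield true-positives and false-positives, while the audited sample of noninfected records yields false-negatives and true-negatives. A minimal sketch of that arithmetic is below; the counts are purely hypothetical for illustration and are not taken from the study.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true infections the ICP detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of noninfected patients correctly classified: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical audit counts (illustration only): record review confirms
# 40 of 45 ICP-flagged infections (5 false-positives) and finds 8 missed
# infections among 100 sampled "noninfected" records.
tp, fp, fn, tn = 40, 5, 8, 92

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 40/48 = 0.83
print(f"specificity = {specificity(tn, fp):.2f}")  # 92/97 = 0.95
```

An ICP-year would meet the study's ≥80% sensitivity threshold in this hypothetical example (0.83), consistent with how the paper dichotomizes "satisfactory" performance.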
