Peer review in clinical radiology practice.

Summary statistics and comparisons of performance measures, generated for each physician by modality, should be collated [13]. Radiologists should be compared with peers in their own facilities in terms of major differences in image interpretation. This comparison needs to take into account the volume of cases interpreted by an individual radiologist to avoid bias: radiologists who work more clinical days than others may accrue a higher number of discrepant cases on peer review, so error counts tallied and normalized to the number of clinical days worked better represent performance. Because misinterpretation and difficult-case disagreement rates differ among imaging modalities, comparison also needs to be specific to imaging modality [14].

Limitations

Radiologist commitment to continuous peer review is relatively limited. Increased workload, radiologist shortages, substantially decreased payments per service, and the resistance of radiologists to anything that increases workload or cost, even by a small amount, have been cited as causes of reduced compliance in the RADPEER program [14]. In addition, reviewing radiologists may be reluctant to perform peer review because of its potential negative influence on relationships with colleagues. Unclear peer review policies and procedures, negative attitudes toward peer review (seeing it as controlling rather than as education or learning for improvement), disbelief that peer review leads to worthwhile results, and fear of information being made public through the legal system all contribute to diminished commitment and effort [5].

Methods such as reactive or proactive review of cases without a reference standard or definitive diagnosis may simply represent opinions, because they are not verified by pathology or clinical follow-up.
In such cases, initial findings may not be proof of error in performance and should be used only as a trigger for further evaluation [14].
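To make the volume normalization discussed above concrete, the following is a minimal sketch; the data layout, function name, and example figures are hypothetical illustrations, not part of RADPEER or any cited program. It tallies discrepant cases per radiologist and modality, then divides by clinical days worked so that high-volume readers are not unfairly flagged:

```python
from collections import defaultdict

def normalized_discrepancy_rates(reviews, clinical_days):
    """Discrepant cases per clinical day, per (radiologist, modality).

    reviews: iterable of (radiologist, modality, discrepant) tuples.
    clinical_days: dict mapping radiologist -> days worked, used to
    normalize raw counts as described in the text.
    """
    counts = defaultdict(int)
    for radiologist, modality, discrepant in reviews:
        if discrepant:
            counts[(radiologist, modality)] += 1
    return {
        key: count / clinical_days[key[0]]
        for key, count in counts.items()
    }

# Hypothetical example: Dr. A worked four times as many days as Dr. B.
reviews = [
    ("A", "CT", True), ("A", "CT", True), ("A", "MR", False),
    ("B", "CT", True),
]
rates = normalized_discrepancy_rates(reviews, {"A": 200, "B": 50})
# Dr. A has more raw CT discrepancies (2 vs. 1) but the lower
# normalized rate: 2/200 = 0.01 per day vs. 1/50 = 0.02 per day.
```

Modality is kept in the key so that, per the text, a radiologist's CT rate is compared only against peers' CT rates, never pooled across modalities.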

[1] Borgstede JP, et al. RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. J Am Coll Radiol, 2004.

[2] Brenner RJ, et al. Radiology and medical malpractice claims: a report on the practice standards claims survey of the Physician Insurers Association of America and the American College of Radiology. AJR Am J Roentgenol, 1998.

[3] Hillman B, et al. Quality and variability in diagnostic radiology. J Am Coll Radiol, 2004.

[4] Weaver SJ, et al. Cognitive and system factors contributing to diagnostic errors in radiology. AJR Am J Roentgenol, 2013.

[5] Cascade P, et al. Quality improvement in diagnostic radiology. AJR Am J Roentgenol, 1990.

[6] Halsted M. Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol, 2004.

[7] Jackson V, et al. RADPEER scoring white paper. J Am Coll Radiol, 2009.

[8] Donnelly L. Performance-based assessment of radiology practitioners: promoting improvement in accordance with the 2007 Joint Commission standards. J Am Coll Radiol, 2007.

[9] Persell SD, et al. ACCF/AHA/AMA-PCPI 2011 performance measures for adults with coronary artery disease and hypertension: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures and the American Medical Association-Physician Consortium for Performance Improvement. Circulation, 2011.

[10] Palamara K, et al. Structured feedback from referring physicians: a novel approach to quality improvement in radiology reporting. AJR Am J Roentgenol, 2013.

[11] Branstetter BF, et al. Optimizing radiology peer review: a mathematical model for selecting future cases based on prior errors. J Am Coll Radiol, 2010.

[12] Violato C, et al. Assessment of radiology physicians by a regulatory authority. Radiology, 2008.

[13] Sosna J, et al. Peer review in diagnostic radiology: current state and a vision for the future. RadioGraphics, 2009.

[14] Agee C, et al. Improving the peer review process: develop a professional review committee for better and quicker results. Healthcare Executive, 2007.

[15] Larson D, et al. Rethinking peer review: what aviation can teach radiology about performance improvement. Radiology, 2011.

[16] Berbaum K, et al. Error in radiology: classification and lessons in 182 cases presented at a problem case conference. Radiology, 1992.

[17] Donnelly LF, et al. Performance-based assessment of radiology faculty: a practical plan to promote improvement and meet JCAHO standards. AJR Am J Roentgenol, 2005.

[18] Grol R, et al. Quality improvement by peer review in primary care: a practical guide. Qual Health Care, 1994.