Interpreting diagnostic accuracy studies for patient care

A diagnostic test accuracy study provides evidence on how well a test correctly identifies or rules out disease, informing subsequent treatment decisions for clinicians, their patients, and healthcare providers. The authors highlight several ways in which data from diagnostic test accuracy studies can be presented and interpreted, and discuss the advantages and disadvantages of each.
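The abstract does not enumerate the specific summary measures discussed, but diagnostic accuracy results are conventionally derived from a 2×2 cross-tabulation of test result against a reference standard. As an illustrative sketch (the counts below are hypothetical, not taken from any study in the paper), the standard measures can be computed as:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Common accuracy measures for a binary test versus a reference
    standard, given true/false positive and negative counts."""
    sens = tp / (tp + fn)        # sensitivity: P(test+ | diseased)
    spec = tn / (tn + fp)        # specificity: P(test- | not diseased)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    dor = lr_pos / lr_neg        # diagnostic odds ratio
    return {"sensitivity": sens, "specificity": spec,
            "PPV": ppv, "NPV": npv,
            "LR+": lr_pos, "LR-": lr_neg, "DOR": dor}

# Hypothetical study: 90 true positives, 20 false positives,
# 10 false negatives, 180 true negatives.
m = accuracy_measures(tp=90, fp=20, fn=10, tn=180)
print(m["sensitivity"])  # 0.9
print(m["LR+"])          # 9.0
```

Note that sensitivity, specificity, and likelihood ratios are properties of the test itself, whereas the predictive values also depend on disease prevalence in the studied population, which is one reason the different presentations carry different advantages and disadvantages.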
