A Review of the Application of Information Theory to Clinical Diagnostic Testing

The fundamental information theory functions of entropy, relative entropy, and mutual information are directly applicable to clinical diagnostic testing, because an individual’s disease state and diagnostic test result are random variables. In this paper, we review the application of information theory to the quantification of diagnostic uncertainty, diagnostic information, and diagnostic test performance. An advantage of information theory functions over more established test performance measures is that they can be used when multiple disease states are under consideration as well as when the diagnostic test can yield multiple or continuous results. Since more than one diagnostic test is often required to determine a patient’s disease state, we also discuss how the theory applies when several tests are used together. The total diagnostic information provided by two or more tests can be partitioned into meaningful components.
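
As a concrete illustration of these quantities (a minimal sketch, not drawn from the paper itself), the Python code below computes the mutual information I(D; T) between a binary disease state D and a binary test result T from an assumed prevalence, sensitivity, and specificity; the function names and the numerical example are illustrative assumptions.

    # Minimal sketch (illustrative only): mutual information I(D; T) between a binary
    # disease state D and a binary test result T, computed from an assumed disease
    # prevalence, test sensitivity, and test specificity.
    import math

    def entropy(probs):
        """Shannon entropy, in bits, of a discrete probability distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def diagnostic_mutual_information(prevalence, sensitivity, specificity):
        """I(D; T) = H(T) - H(T | D) for a binary disease state and a binary test."""
        false_positive_rate = 1.0 - specificity
        # Marginal probability of a positive test result, P(T+).
        p_test_positive = prevalence * sensitivity + (1.0 - prevalence) * false_positive_rate
        # H(T): uncertainty of the test result before the disease state is known.
        h_t = entropy([p_test_positive, 1.0 - p_test_positive])
        # H(T | D): average uncertainty of the test result within each disease state.
        h_t_given_d = (
            prevalence * entropy([sensitivity, 1.0 - sensitivity])
            + (1.0 - prevalence) * entropy([false_positive_rate, 1.0 - false_positive_rate])
        )
        return h_t - h_t_given_d

    # Hypothetical example: 10% prevalence, 90% sensitivity, 95% specificity.
    print(f"I(D; T) = {diagnostic_mutual_information(0.10, 0.90, 0.95):.3f} bits")

For two tests, the chain rule of mutual information, I(D; T1, T2) = I(D; T1) + I(D; T2 | T1), gives one such partition of the total diagnostic information: the information provided by the first test plus the additional information contributed by the second.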
