Computerised diagnosis in acute psychiatry: validity of CIDI-Auto against routine clinical diagnosis.

The validity of the self-administered CIDI-Auto for detecting ICD-10 diagnoses was assessed in a study of 126 patients admitted to an acute psychiatry unit. Agreement between the CIDI-Auto and a psychiatrist's diagnosis was compared with agreement between two psychiatrists. The CIDI-Auto generated an average of 2.3 diagnoses per subject, and the psychiatrists 1.3. Agreement, measured both by overall percentage agreement and by kappa, was poor between the CIDI-Auto and the psychiatrist's principal diagnosis, whereas agreement between psychiatrists was good. At the level of general diagnostic class (e.g. substance use disorder, schizophrenic disorder, mood disorder), agreement between the CIDI-Auto and the psychiatrist on principal diagnosis was poor (kappa = 0.23), while agreement between psychiatrists was good (kappa = 0.69). The findings indicate that the self-administered CIDI-Auto has poor validity, measured against clinical diagnosis, for hospitalised patients of acute psychiatric services. This poor validity limits the diagnostic utility of computer-based methods in clinical settings and casts doubt on diagnostic findings obtained when such methods are used in surveys.
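The agreement statistic reported above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. As an illustrative sketch only (not the authors' analysis code, and using a made-up agreement table), it can be computed from a cross-tabulation of two raters' diagnoses as follows:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table.

    Rows index rater A's categories, columns rater B's; cell [i][j]
    counts subjects rated i by A and j by B.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of subjects on the diagonal.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: product of the marginal proportions, summed.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 example: 35 of 50 subjects receive the same rating.
print(cohens_kappa([[20, 5], [10, 15]]))  # → 0.4
```

Note that, as references [11] and [17] in the original paper discuss, kappa can be low even when raw agreement is high if the marginal distributions are skewed, which is why the abstract reports both measures.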