Integrating Learning from Examples into the Search for Diagnostic Policies

This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., which tests to perform, followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost, which is the sum of measurement costs and misdiagnosis costs; in most diagnostic settings, there is a tradeoff between these two kinds of cost. The paper formalizes diagnostic decision making as a Markov Decision Process (MDP) and introduces a new family of systematic search algorithms, based on the AO* algorithm, for solving this MDP. To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. It also introduces several greedy algorithms, including improvements over previously published methods. The paper then addresses the question of learning diagnostic policies from examples: when the probabilities of diseases and test results are estimated from training data, there is a serious risk of overfitting, so regularizers are integrated into the search algorithms. Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The experiments show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods, and that for training sets of realistic size, the systematic search algorithms are practical on today's desktop computers.
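To make the measurement/misdiagnosis tradeoff concrete, here is a minimal Python sketch of the expected-total-cost recursion the abstract describes: a policy is evaluated by recursing over the AND/OR tree of test outcomes, paying a measurement cost at each test node and an expected misdiagnosis cost at each diagnosis leaf. The data structures and names below (the tuple encoding of policy nodes, `expected_cost`, `make_outcome_prob`, and the toy numbers) are illustrative assumptions for exposition, not the paper's implementation.

```python
# A minimal sketch (illustrative, not the paper's code) of evaluating a
# diagnostic policy's expected total cost. A policy node is either
#   ("diagnose", d)                        -- a leaf diagnosis, or
#   ("test", t, {outcome: subpolicy, ...}) -- a test with one branch
#                                             per possible outcome.

def expected_cost(policy, belief, test_cost, mis_cost, outcome_prob):
    """Expected total cost of `policy` from belief state `belief`
    (a dict mapping each disease to its current probability).

    test_cost[t]        -- measurement cost of test t
    mis_cost[d_true][d] -- cost of diagnosing d when d_true holds
    outcome_prob(t, o, belief) -- returns (P(test t yields outcome o),
                                  posterior belief after observing o)
    """
    if policy[0] == "diagnose":
        d = policy[1]
        # Expected misdiagnosis cost under the current belief state.
        return sum(p * mis_cost[d_true][d] for d_true, p in belief.items())
    _, t, branches = policy
    # Pay the measurement cost, then average over the test's outcomes.
    total = test_cost[t]
    for outcome, subpolicy in branches.items():
        p, posterior = outcome_prob(t, outcome, belief)
        total += p * expected_cost(subpolicy, posterior,
                                   test_cost, mis_cost, outcome_prob)
    return total


def make_outcome_prob(likelihood):
    """Bayes update, assuming likelihood[t][d][o] = P(test t = o | disease d)."""
    def outcome_prob(t, o, belief):
        p = sum(belief[d] * likelihood[t][d][o] for d in belief)
        posterior = {d: belief[d] * likelihood[t][d][o] / p for d in belief}
        return p, posterior
    return outcome_prob


if __name__ == "__main__":
    # Toy numbers (made up): two diseases, one binary test.
    prior = {"A": 0.7, "B": 0.3}
    likelihood = {"t": {"A": {"pos": 0.9, "neg": 0.1},
                        "B": {"pos": 0.2, "neg": 0.8}}}
    test_cost = {"t": 1.0}
    mis_cost = {"A": {"A": 0.0, "B": 10.0}, "B": {"A": 10.0, "B": 0.0}}
    policy = ("test", "t", {"pos": ("diagnose", "A"),
                            "neg": ("diagnose", "B")})
    print(expected_cost(policy, prior, test_cost, mis_cost,
                        make_outcome_prob(likelihood)))  # -> 2.3
```

With these toy numbers the policy costs 2.3 in expectation (1.0 for the test plus 1.3 expected misdiagnosis cost), versus 3.0 for diagnosing the majority disease without testing, which illustrates the tradeoff that the AO*-based and greedy search algorithms optimize over the space of all such policies.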
