Explanation-Based Methods for Simplifying Intractable Theories

Existing machine learning programs possess only limited abilities to exploit previously acquired background knowledge. A technique called "explanation-based learning" (EBL) has recently been developed to address this problem. EBL is limited, however, by a requirement that the background knowledge meet restrictive conditions: EBL cannot operate without a complete, correct, and tractable theory of the domain under study. In many cases no adequate domain theory can be found. The research proposed here will address this limitation. It will be primarily directed toward extending EBL methods to handle intractable theories. Techniques will be developed for using explanations of examples to make domain theories more tractable. The explanations will be used to find assumptions that can simplify intractable theories. A useful class of assumptions, called "optimistic assumptions", will be defined informally. A program will be developed to learn assumptions drawn from this class. The program will be tested in the domain of "hearts" and possibly in other domains as well. This research will be significant inasmuch as optimistic assumptions appear to be applicable to a wide variety of domains. The research will also be relevant to the problems of incomplete and incorrect theories, in addition to the problem of intractability.
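To make the idea of a simplifying assumption concrete, the following is a minimal sketch, not taken from the proposal itself. It uses a toy model of a hearts trick (all names, the single-suit restriction, and the particular "optimistic" rule are illustrative assumptions): a full theory must enumerate every way the unseen cards of a suit could be split among the opponents, while an assumption such as "whoever holds the highest unseen card will play it" collapses that exponential enumeration into a constant-time check.

```python
from itertools import product

# Toy hearts setting (illustrative, not from the proposal): one trick in a
# single suit, ranks 2..14 (14 = Ace). We lead a card and ask whether we are
# guaranteed to avoid taking the trick. For simplicity the model fixes one
# opponent behavior: each opponent plays the highest card it holds.

def exhaustive_safe(my_card, unseen, n_opponents=3):
    """Full ("intractable") theory: check every possible assignment of the
    unseen cards to opponents. Cost grows as n_opponents ** len(unseen)."""
    for assignment in product(range(n_opponents), repeat=len(unseen)):
        holdings = [[] for _ in range(n_opponents)]
        for card, owner in zip(unseen, assignment):
            holdings[owner].append(card)
        # If no opponent can beat our card in this deal, we take the trick.
        if not any(h and max(h) > my_card for h in holdings):
            return False
    return True

def optimistic_safe(my_card, unseen):
    """Simplified theory under the optimistic assumption that the highest
    unseen card of the suit will be played: safe iff a higher card is
    still unseen. One comparison instead of exponential enumeration."""
    return bool(unseen) and max(unseen) > my_card
```

Under this toy model the two theories agree on every case, which is the point of such an assumption: it preserves the theory's verdicts on the examples while removing the combinatorial search that made the original theory intractable.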