Existing machine learning techniques offer only limited means of handling computationally intractable domains. This research extends explanation-based learning techniques to overcome such limitations, following a strategy of sacrificing theory accuracy in order to gain tractability: intractable theories are approximated by incorporating simplifying assumptions, and explanations of teacher-provided examples are used to guide a search for accurate approximate theories. The paper begins with an overview of this learning technique. A typology of simplifying assumptions is then presented, along with a technique for representing such assumptions in terms of generic functions. Methods for generating and searching a space of approximate theories are discussed, and empirical results from a testbed domain are presented. Finally, some implications of this research for the field of explanation-based learning are discussed.
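To make the search idea concrete, the following is a minimal, hypothetical Python sketch of one component: enumerating candidate approximate theories formed by adding simplifying assumptions to a base theory, and scoring each by how many teacher-provided examples it explains. All identifiers here (Theory, Assumption, explains, search_approximate_theories) are invented for this illustration and are not from the paper, which uses explanation-guided search rather than exhaustive enumeration.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import Callable, FrozenSet, List, Tuple


@dataclass(frozen=True)
class Example:
    """A teacher-provided training example with its observed outcome."""
    situation: str
    outcome: bool


# An "approximate theory" is modeled here as a set of named simplifying
# assumptions layered onto an (implicit) intractable base theory.
Assumption = str
Theory = FrozenSet[Assumption]


def accuracy(theory: Theory,
             examples: List[Example],
             explains: Callable[[Theory, Example], bool]) -> float:
    """Fraction of examples whose outcome the approximate theory explains."""
    if not examples:
        return 0.0
    return sum(explains(theory, ex) for ex in examples) / len(examples)


def search_approximate_theories(
        assumptions: List[Assumption],
        examples: List[Example],
        explains: Callable[[Theory, Example], bool]) -> Tuple[Theory, float]:
    """Exhaustively search subsets of simplifying assumptions, preferring
    the approximate theory that explains the most examples.  Because k is
    iterated in ascending order and we only replace on strict improvement,
    ties break toward fewer assumptions (a weaker simplification)."""
    best: Tuple[Theory, float] = (frozenset(), 0.0)
    for k in range(len(assumptions) + 1):
        for subset in combinations(assumptions, k):
            theory = frozenset(subset)
            score = accuracy(theory, examples, explains)
            if score > best[1]:
                best = (theory, score)
    return best


if __name__ == "__main__":
    # Toy stand-in for tractable inference: an assumption "explains" an
    # example if its name appears in the example's situation description.
    def explains(theory: Theory, ex: Example) -> bool:
        return ex.outcome == any(a in ex.situation for a in theory)

    examples = [Example("persistence holds here", True),
                Example("no simplification applies", False)]
    theory, score = search_approximate_theories(
        ["persistence", "frictionless"], examples, explains)
    print(theory, score)  # frozenset({'persistence'}) 1.0
```

In the approach the abstract describes, the exhaustive enumeration above would be replaced by explanation-guided generation of candidate assumptions, so that only theories suggested by failures to explain the teacher's examples are considered.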