Compiled knowledge, which allows macro inference steps through an explanation space, can enable explanation-based learning (EBL) systems to reason efficiently in complex domains. Without this knowledge, explaining goal concepts is generally infeasible; moreover, the problem of finding the most general operational concept definition is intractable. Unfortunately, the use of compiled knowledge leads to explanations that yield overly specific concept definitions. These concept definitions may be overly specific in one of two ways: either a similar concept definition with one or more constants changed to variables is operational, or a concept definition that is more general, according to the implication rules of the domain theory, is operational. This paper introduces a method (IMEX) for modifying, in a directed manner, the explanation structures of goal concepts that have been derived using compiled knowledge. In this way, more general operational concept definitions may be obtained.
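The first kind of over-specificity can be sketched concretely. The toy code below (not the paper's IMEX method; the literal representation and predicate names are assumptions for illustration) shows a learned concept definition whose constants are replaced by variables while preserving co-reference, which is the generalization a modified explanation structure would license.

```python
# Toy sketch of constant-to-variable generalization (NOT the IMEX algorithm).
# A concept definition is a list of literals; a literal is a tuple such as
# ('on', 'block_a', 'block_b'). Variables are strings beginning with '?'.

def variablize(literals):
    """Replace each distinct constant with a fresh variable.

    Repeated constants map to the same variable, so the co-reference
    structure of the original explanation is preserved.
    """
    mapping = {}
    generalized = []
    for pred, *args in literals:
        new_args = []
        for a in args:
            if a.startswith('?'):          # already a variable; keep it
                new_args.append(a)
            else:
                if a not in mapping:       # first occurrence of this constant
                    mapping[a] = f'?x{len(mapping)}'
                new_args.append(mapping[a])
        generalized.append((pred, *new_args))
    return generalized

# An overly specific definition learned from a single example ...
specific = [('on', 'block_a', 'block_b'), ('clear', 'block_a')]
# ... and its variablized form, valid only when the domain theory
# licenses the same explanation with the constants generalized.
print(variablize(specific))
# → [('on', '?x0', '?x1'), ('clear', '?x0')]
```

Note that such syntactic variablization is sound only when the explanation does not depend on the identity of the constants; deciding when that holds is exactly what requires examining the explanation structure.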