Establishing the Coherence of an Explanation to Improve Refinement of an Incomplete Knowledge Base

The power of knowledge acquisition systems that employ failure-driven learning derives from two main sources: an effective global credit assignment process that determines when to acquire new knowledge by watching an expert's behavior, and an efficient local credit assignment process that determines what new knowledge should be created to complete a failed explanation of an expert's action. Because an input (e.g., an observed action) to a failure-driven learning system can generate multiple explanations, a learning opportunity to extend the incomplete domain theory can go unrecognized. This paper describes a failure-driven learning system with a context-analysis mechanism that constrains explanations and thereby increases the number of learning opportunities. Experimentation using a synthetic expert system as the observed expert shows that the use of context analysis increases the number of learning opportunities by about 47%, and increases the overall amount of improvement to the expert system by around 10%.
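The interplay described above (global credit assignment flagging a failed explanation, context analysis pruning the candidates, and local credit assignment creating new knowledge) can be sketched in miniature. This is an illustrative assumption of my own, not the paper's implementation: the `Rule` structure, the subset test used for context consistency, and the trivially specialized learned rule are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A hypothetical domain-theory rule: a conclusion that holds
    under a set of contextual preconditions."""
    name: str
    context: set
    conclusion: str

def candidate_explanations(action, rules):
    # All rules whose conclusion matches the observed action are
    # candidate explanations for it.
    return [r for r in rules if r.conclusion == action]

def filter_by_context(candidates, current_context):
    # Context analysis (sketched here as a subset test): keep only
    # explanations whose assumed context is consistent with the
    # observed situation.
    return [r for r in candidates if r.context <= current_context]

def learn_from_failure(action, rules, current_context):
    """Failure-driven learning step for one observed expert action."""
    cands = filter_by_context(candidate_explanations(action, rules),
                              current_context)
    if len(cands) == 1:
        return cands[0]  # a single coherent explanation: no failure
    if not cands:
        # Local credit assignment: no explanation exists, so create a
        # new rule to complete the failed explanation (here, a naive
        # rule specialized to the current context).
        new_rule = Rule(f"learned_{action}", set(current_context), action)
        rules.append(new_rule)
        return new_rule
    # Multiple explanations survive even after context filtering, so
    # the learning opportunity is still ambiguous and is skipped.
    return None
```

In this toy version, context analysis is what converts an ambiguous case (several explanations for one action) into either a single coherent explanation or a recognized failure that triggers learning, mirroring the abstract's claim that constraining explanations exposes more learning opportunities.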