Constructing and Refining Causal Explanations from an Inconsistent Domain Theory

Recent work in machine learning has demonstrated the utility of explanation formation as a guide to generalization. Most of these investigations have concentrated on forming explanations from consistent domain theories. I present an approach to forming explanations from domain theories that are inconsistent because they contain abstractions which suppress potentially relevant detail. In this approach, explanations are constructed to support reasoning tasks and are refined in a failure-driven manner; the elaboration of an explanation is guided by the structuring of the domain theory into layers of abstraction. This work is part of a larger effort to develop a causal modelling system which forms explanations of the underlying causal relations in physical systems. The system uses an inconsistent, common-sense theory of the mechanisms that operate in physical systems.
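To make the failure-driven refinement loop concrete, the following is a minimal sketch, assuming a domain theory represented as a list of layers ordered from most abstract to most detailed, each of which may or may not yield an explanation for a query. All identifiers here (explain, Layer, adequate, and the toy two-layer physics theory) are hypothetical illustrations, not names drawn from the system described above.

```python
from typing import Callable, List, Optional

Explanation = str
# A layer of the domain theory: maps a query to an explanation, or None
# if this level of abstraction has nothing to say about the query.
Layer = Callable[[str], Optional[Explanation]]

def explain(query: str,
            layers: List[Layer],
            adequate: Callable[[Explanation], bool]) -> Optional[Explanation]:
    """Construct an explanation at the most abstract layer that yields one
    acceptable to the reasoning task; on failure, descend to a more
    detailed layer (failure-driven refinement)."""
    for layer in layers:  # ordered most abstract -> most detailed
        candidate = layer(query)
        if candidate is not None and adequate(candidate):
            return candidate  # the abstraction suffices for this task
        # Failure: the suppressed detail matters here, so elaborate the
        # explanation by consulting the next, more concrete layer.
    return None  # domain theory exhausted without an adequate explanation


# Hypothetical two-layer theory of why a rolling ball slows down. The
# abstract layer ignores dissipation; the detailed layer restores it.
abstract_layer: Layer = (
    lambda q: "motion persists" if q == "why does the ball slow?" else None)
detailed_layer: Layer = (
    lambda q: "friction dissipates kinetic energy"
    if q == "why does the ball slow?" else None)

# The task rejects the abstract account (it contradicts the observed
# slowing), so refinement descends a layer and succeeds.
result = explain("why does the ball slow?",
                 [abstract_layer, detailed_layer],
                 adequate=lambda e: "friction" in e)
print(result)  # -> friction dissipates kinetic energy
```

In this sketch the adequacy test stands in for the reasoning task: an explanation "fails" when the task it was built to support contradicts it, and the layered organization of the theory tells the refiner where to look for the detail the failed abstraction suppressed.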