Induction in an Abstraction Space: A Form of Constructive Induction

We report on MIRO, a learning system that performs supervised concept formation in an abstraction space. Given a domain theory, the method constructs this abstraction space by deduction over instances and then performs induction in it, rather than in the initial space defined by the instances alone. MIRO can also be regarded as a variant of constructive induction. The Vapnik-Chervonenkis model suggests that learning in an abstraction space can result in a substantial speedup, and we provide empirical studies that validate this proposition. We also show that learning in an abstraction space can reduce the number of false negative and false positive classifications, because coincidental patterns are filtered out by the deduction process. The method can extend an incomplete domain theory, represented as attribute-value pairs, with a set of rules that represent a disjunctive concept derived from a batch of training instances.
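To make the two-phase pipeline concrete, the sketch below (a minimal illustration under stated assumptions, not the MIRO implementation) applies a hypothetical domain theory of attribute-value rules deductively to each training instance to derive abstract attributes, and then runs a standard inductive learner over the abstracted instances. The rule format, the toy data, and the use of scikit-learn's decision tree in place of MIRO's rule learner are all illustrative assumptions.

```python
# Conceptual sketch of deduction-then-induction learning in an abstraction space.
# Assumptions: rules are (conditions -> derived attribute) pairs, instances are
# attribute-value dictionaries, and a decision tree stands in for the rule learner.

from sklearn.tree import DecisionTreeClassifier

# Hypothetical domain theory: each rule maps a conjunction of attribute-value
# tests to a derived (abstract) attribute.
DOMAIN_THEORY = [
    ({"shape": "round", "edible": "yes"}, "fruit-like"),
    ({"legs": "4", "fur": "yes"}, "mammal-like"),
]

def deduce(instance):
    """Apply the domain theory deductively, adding derived abstract attributes."""
    abstracted = dict(instance)
    for conditions, derived in DOMAIN_THEORY:
        if all(abstracted.get(attr) == val for attr, val in conditions.items()):
            abstracted[derived] = "true"
    return abstracted

def to_vector(instance, attribute_values):
    """Encode an instance as a binary vector over attribute=value pairs."""
    return [1 if instance.get(attr) == val else 0 for attr, val in attribute_values]

# Toy batch of training instances (attribute-value pairs) with class labels.
raw_instances = [
    ({"shape": "round", "edible": "yes", "legs": "0"}, 1),
    ({"shape": "square", "edible": "no", "legs": "0"}, 0),
    ({"legs": "4", "fur": "yes", "edible": "no"}, 1),
]

# Phase 1: deduction maps each instance into the abstraction space.
abstracted = [(deduce(x), label) for x, label in raw_instances]
attribute_values = sorted({(a, v) for x, _ in abstracted for a, v in x.items()})

# Phase 2: induction is performed over the abstracted instances,
# so the learner can generalize on the derived attributes as well.
X = [to_vector(x, attribute_values) for x, _ in abstracted]
y = [label for _, label in abstracted]
classifier = DecisionTreeClassifier().fit(X, y)
```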