Selective Reformulation of Examples in Concept Learning

Abstract The fundamental tradeoff well known in Knowledge Representation and Reasoning also affects Concept Learning from Examples. Representing learning examples in an attribute-value language has proved to support efficient inductive algorithms but offers limited expressiveness, whereas more expressive representation languages, typically subsets of First Order Logic (FOL), are supported by less efficient algorithms. An underlying problem is the number of different ways of matching examples: exactly one in an attribute-value representation, but potentially very large in a FOL representation. This paper describes a novel approach to performing representation shifts on learning examples. The structure of the learning examples, initially represented in a subset of FOL, is reformulated so as to produce new learning examples represented in an attribute-value language. What constitutes an adequate structure varies with the learning task. We introduce the notion of morion (from the Greek) to characterize this structure and show, through a concrete example, the advantages it offers. We then describe an algorithm that reformulates learning examples automatically and analyze its complexity. This approach to deductive reformulation is implemented in the REMO system, which has been experimented with on learning the construction of Chinese characters.
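
To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not drawn from the paper) of the kind of reformulation the abstract describes: a fixed structural pattern, standing in for a morion, is matched against a relational example, and attribute values are read off the resulting bindings to produce a flat attribute-value example. The predicates (`part`, `left_of`, `stroke_count`), the pattern, and the attribute names are all invented for illustration and are not the REMO algorithm itself.

```python
# Illustrative sketch only: toy reformulation of a relational example into an
# attribute-value example via a fixed structural pattern (a stand-in for a morion).
# All predicates, names, and the pattern are hypothetical, not taken from REMO.

# A learning example as a set of ground facts (a small FOL-like subset).
example = {
    ("part", "c1", "p1"), ("part", "c1", "p2"),
    ("left_of", "p1", "p2"),
    ("stroke_count", "p1", 3), ("stroke_count", "p2", 4),
}

# Hypothetical structural pattern: terms starting with an upper-case letter are variables.
pattern = [("part", "C", "X"), ("part", "C", "Y"), ("left_of", "X", "Y")]

def match(literals, facts, subst=None):
    """Return one substitution binding the pattern's variables, or None."""
    subst = subst or {}
    if not literals:
        return subst
    head, *rest = literals
    for fact in facts:
        if len(fact) != len(head) or fact[0] != head[0]:
            continue
        new = dict(subst)
        ok = True
        for term, value in zip(head[1:], fact[1:]):
            if term[0].isupper():                 # variable term
                if new.setdefault(term, value) != value:
                    ok = False
                    break
            elif term != value:                   # constant mismatch
                ok = False
                break
        if ok:
            result = match(rest, facts, new)      # backtrack if the rest fails
            if result is not None:
                return result
    return None

binding = match(pattern, example)                 # e.g. {'C': 'c1', 'X': 'p1', 'Y': 'p2'}

# Read attribute values off the bindings to obtain a flat attribute-value example.
strokes = {obj: val for (pred, obj, val) in example if pred == "stroke_count"}
av_example = {
    "left_part_strokes": strokes[binding["X"]],
    "right_part_strokes": strokes[binding["Y"]],
}
print(av_example)   # {'left_part_strokes': 3, 'right_part_strokes': 4}
```

Because the pattern fixes how the parts of each example are matched, every relational example is mapped onto the same fixed set of attributes, which is what allows standard attribute-value learners to be applied afterwards.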
