LEARNING ATTRIBUTIONAL TWO-TIERED DESCRIPTIONS OF FLEXIBLE CONCEPTS

Most real-life concepts are flexible; that is, they lack precise definitions and are context dependent. Representing and learning flexible concepts presents a fundamental challenge for artificial intelligence. This paper describes a method for learning such concepts, based on a two-tiered concept representation. In this representation, the first tier, called the Base Concept Representation (BCR), describes the most relevant properties of a concept in an explicit, comprehensible, and efficient form. The second tier, called the Inferential Concept Interpretation (ICI), contains procedures for matching instances with concept descriptions, and inference rules defining allowable transformations of the concept under different contexts and exceptional cases. In the method, the BCR is obtained by first creating a complete and consistent concept description, and then optimizing it according to a general description quality criterion. The complete and consistent description is obtained by applying the AQ inductive learning methodology. The optimization process is performed by a double-level best-first search. The ICI is defined in part by a method of flexible matching and in part by a set of inference rules. The method has been implemented in the AQTT-15 learning system, and experimental results show that such a two-tiered concept representation not only produces simpler concept descriptions, but may also increase their predictive power.
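The flexible matching the abstract refers to can be illustrated with a minimal sketch: rather than requiring an instance to satisfy every condition of a rule (strict matching), the matcher computes a degree of match and assigns the class of the best-matching rule, so atypical instances are still handled gracefully. The rule format, attribute names, and equal-weight scoring below are illustrative assumptions, not the actual AQTT-15 implementation.

```python
# Hypothetical sketch of flexible matching over attributional rules.
# Each rule maps an attribute to the set of values it allows; the degree
# of match is the fraction of conditions the instance satisfies.

def degree_of_match(instance, rule):
    """Fraction of a rule's conditions (selectors) satisfied by the instance."""
    satisfied = sum(1 for attr, allowed in rule.items()
                    if instance.get(attr) in allowed)
    return satisfied / len(rule)

def classify(instance, concept_rules):
    """Return the concept whose best rule matches the instance most closely."""
    best_concept, best_score = None, -1.0
    for concept, rules in concept_rules.items():
        score = max(degree_of_match(instance, r) for r in rules)
        if score > best_score:
            best_concept, best_score = concept, score
    return best_concept, best_score

# Toy example: two concepts, each described by a single attributional rule.
rules = {
    "chair": [{"has_legs": {True}, "has_back": {True}, "seat": {"flat"}}],
    "stool": [{"has_legs": {True}, "has_back": {False}, "seat": {"flat"}}],
}
# An atypical chair (round seat) matches no rule exactly, but flexible
# matching still classifies it as the closest concept.
print(classify({"has_legs": True, "has_back": True, "seat": "round"}, rules))
```

A strict matcher would reject this instance from both concepts; the two-tiered scheme keeps the base descriptions simple and delegates such borderline cases to the matching procedure.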
