Inductive learning using generalized distance measures

This paper briefly reviews the two currently dominant paradigms in machine learning, connectionist network (CN) models and symbol processing (SP) systems, and argues for the centrality of knowledge representation frameworks in learning. It examines a range of representations in increasing order of complexity, together with the measures of similarity or distance appropriate for each, introduces the notion of a generalized distance measure (GDM), and presents a class of GDM-based inductive learning algorithms (GDML). GDML are motivated by the need to integrate the SP and CN approaches to machine learning. GDM offer a natural generalization of the notion of distance, or measure of mismatch, used in a variety of pattern recognition techniques (e.g., k-nearest neighbor classifiers and neural networks using radial basis functions) to a range of structured representations such as strings, trees, pyramids, association nets, and conceptual graphs, including those used in computer vision and in syntactic approaches to pattern recognition. GDML are a natural extension of generative (constructive) learning algorithms for neural networks, enabling an adaptive and parsimonious determination of the network topology as well as the desired weights as a function of learning. Applications of GDML include tasks such as planning, concept learning, and 2- and 3-dimensional object recognition. GDML thus offer a basis for a natural integration of the SP and CN approaches to the construction of intelligent systems that perceive, learn, and act.
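The abstract's simplest concrete instance of a GDM is a distance over strings rather than feature vectors. As an illustrative sketch (not code from the paper; the function names here are hypothetical), the classic Levenshtein edit distance, computed by Wagner–Fischer dynamic programming, can serve as the distance in a 1-nearest-neighbor classifier over string-structured examples:

```python
def edit_distance(a, b):
    """Levenshtein distance via Wagner-Fischer DP.

    Unit cost for insertion, deletion, and substitution; this plays the
    role of a GDM over strings.
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]


def nearest_neighbor(query, labeled_examples, distance=edit_distance):
    """Classify `query` by the label of its closest example under the GDM."""
    return min(labeled_examples, key=lambda ex: distance(query, ex[0]))[1]


examples = [("abc", "A"), ("abd", "A"), ("xyz", "B")]
print(nearest_neighbor("abx", examples))  # closest examples are the "A" strings
```

The same nearest-neighbor scheme generalizes to the other structured representations the abstract lists (trees, graphs) by swapping in the corresponding edit distance, e.g., a tree-to-tree or attributed-graph distance.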
