Learning and associative memory

Models for learning have been widely developed in the literature and applied to many problems, such as classification, pattern recognition, databases, and vision [6, 18, 26]. They rely on methods related to those used in other domains: statistical methods, linear classifiers, clustering, relaxation (see Devijver, Jain, and Kittler in this volume), and so on. More recently, further models have been designed, called "connectionist" models, which originated in the work on linear classifiers. They make use of networks of interconnected elements (automata) and provide a general framework for distributed representation of knowledge (in the connections) [7], distributed processing, learning, and restoration. Originally designed as models of the brain, they today cover many domains traditionally part of artificial intelligence: vision [3, 4], speech and language processing [23], games, semantic networks [13], and cognitive science [20]. Probabilistic models have also been developed along similar lines to solve problems in pattern recognition: restoration of images [10], figure-ground separation [22], and translation invariance [16, 17, 21].
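
To make the phrase "knowledge in the connections" concrete, here is a minimal sketch of a Hopfield-type associative memory of the kind discussed above: patterns are stored in a symmetric weight matrix by a Hebbian rule and restored from noisy cues by iterating threshold automata. The network size, number of patterns, and function names are illustrative assumptions, not taken from the text.

```python
import numpy as np

def store(patterns):
    """Hebbian storage: sum of outer products of +/-1 patterns, no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, sweeps=10):
    """Asynchronous threshold updates; the state relaxes toward a stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Usage: store two random patterns, corrupt one, and restore it from the noisy cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 64))
W = store(patterns)
noisy = patterns[0].copy()
noisy[:10] *= -1                      # flip 10 of the 64 components
print(np.array_equal(recall(W, noisy), patterns[0]))   # True if the cue is restored
```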

[1] Geoffrey E. Hinton et al., A Learning Algorithm for Boltzmann Machines, Cognitive Science, 1985.

[2] Teuvo Kohonen, Self-Organization and Associative Memory, Third Edition, Springer Series in Information Sciences, 1989.

[3] James A. Anderson, Cognitive Capabilities of a Parallel System, 1986.

[4] Y. Le Cun, Learning Process in an Asymmetric Threshold Network, 1986.

[5] Geoffrey E. Hinton et al., Parallel Models of Associative Memory, 1989.

[6] Sylvie Thiria et al., Automata networks and artificial intelligence, 1987.

[7] Françoise Fogelman-Soulié et al., Disordered Systems and Biological Organization, NATO ASI Series, 1986.

[8] Richard O. Duda et al., Pattern classification and scene analysis, Wiley-Interscience, 1974.

[9] Eric Goles Ch. et al., Decreasing energy functions as a tool for studying threshold networks, Discrete Applied Mathematics, 1985.

[10] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences of the United States of America, 1982.

[11] Geoffrey E. Hinton et al., Parallel visual computation, Nature, 1983.

[12] Gérard Weisbuch et al., Scaling laws for the attractors of Hopfield networks, 1985.

[13] Dana H. Ballard, Parameter Nets, Artificial Intelligence, 1984.

[14] Geoffrey E. Hinton et al., Learning symmetry groups with hidden units: beyond the perceptron, 1986.

[15] Bernard Widrow et al., Adaptive switching circuits, 1988.

[16] A. Doma, Generalized Inverses of Matrices and Its Applications, 1983.

[17] T. Greville et al., Some Applications of the Pseudoinverse of a Matrix, 1960.

[18] Gérard Weisbuch et al., Random Iterations of Threshold Networks and Associative Memory, SIAM Journal on Computing, 1987.

[19] Terrence J. Sejnowski et al., NETtalk: a parallel network that learns to read aloud, 1988.

[20] Geoffrey E. Hinton et al., Learning internal representations by error propagation, 1986.

[21] Donald Geman et al., Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984.

[22] James L. McClelland et al., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, 1986.