Perceptron-based learning algorithms

A key task for connectionist research is the development and analysis of learning algorithms. This paper examines several supervised learning algorithms for single-cell and network models. At the heart of these algorithms is the pocket algorithm, a modification of perceptron learning that makes perceptron learning well behaved with nonseparable training data, even when the data are noisy and contradictory. Features of these algorithms include: speed, i.e. the algorithms are fast enough to handle large sets of training data; network scaling, i.e. network methods scale up almost as well as single-cell models when the number of inputs is increased; analytic tractability, i.e. upper bounds on classification error are derivable; online learning, i.e. some variants can learn continually without referring to previous data; and winner-take-all (choice) groups, i.e. the algorithms can be adapted to select one out of a number of possible classifications. These learning algorithms are suitable for applications in machine learning, pattern recognition, and connectionist expert systems.
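The pocket idea can be sketched as follows: run ordinary perceptron updates, but keep "in the pocket" the weight vector that has classified the most training examples correctly so far, so a good set of weights survives even when the data are not linearly separable. This is a minimal illustrative sketch, not the paper's exact procedure; the function name, the fixed epoch budget, and the full-pass accuracy check after each update are assumptions made for clarity.

```python
import numpy as np

def pocket_algorithm(X, y, epochs=500, seed=0):
    """Sketch of pocket-style perceptron learning.

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    Returns the pocketed weight vector (bias component first).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])   # prepend a constant bias input
    w = np.zeros(d + 1)                    # current perceptron weights
    pocket_w = w.copy()                    # best weights seen so far
    pocket_correct = int((np.sign(Xb @ pocket_w) == y).sum())
    for _ in range(epochs):
        i = rng.integers(n)                # draw a random training example
        if np.sign(Xb[i] @ w) != y[i]:     # standard perceptron update on a mistake
            w = w + y[i] * Xb[i]
            correct = int((np.sign(Xb @ w) == y).sum())
            if correct > pocket_correct:   # new best: put these weights in the pocket
                pocket_w, pocket_correct = w.copy(), correct
    return pocket_w
```

With nonseparable data, plain perceptron weights can cycle indefinitely, while the pocketed weights change only when a strictly better classifier appears, which is what makes the procedure well behaved.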
