Convergence properties and stationary points of a perceptron learning algorithm

An analysis of the stationary (convergence) points of an adaptive algorithm that adjusts the perceptron weights is presented. The algorithm is identical in form to the least-mean-square (LMS) algorithm, except that a hard limiter is incorporated at the output of the summer. The algorithm is described in detail, a simple two-input example is presented, and some of its convergence properties are illustrated. When the input of the perceptron is a Gaussian random vector, the stationary points of the algorithm are not unique; they depend on the algorithm step size and the momentum constant. The stationary points of the algorithm are presented, and the properties of the adaptive weight vector near convergence are discussed. Computer simulations that verify the analysis are given.
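
A minimal sketch of the update rule the abstract describes may help fix ideas: an LMS-style weight update with a hard limiter (signum function) at the summer output, plus a momentum term. The function names, the step size mu, the momentum constant alpha, and the Gaussian training data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hard_limiter(v):
    """Signum nonlinearity applied to the summer output."""
    return np.where(v >= 0.0, 1.0, -1.0)

def train_perceptron(X, d, mu=0.01, alpha=0.5, n_epochs=50, seed=0):
    """Adapt weights w with a momentum-updated perceptron rule of the form
        e(k)   = d(k) - sgn(w(k)^T x(k))
        w(k+1) = w(k) + mu * e(k) * x(k) + alpha * (w(k) - w(k-1))
    (assumed form; mu is the step size, alpha the momentum constant)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1]) * 0.01  # small random initial weights
    w_prev = w.copy()
    for _ in range(n_epochs):
        for x_k, d_k in zip(X, d):
            e_k = d_k - hard_limiter(w @ x_k)           # hard-limited error
            w_next = w + mu * e_k * x_k + alpha * (w - w_prev)
            w_prev, w = w, w_next
    return w

# Two-input Gaussian example, loosely mirroring the setup in the abstract.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 2))              # Gaussian input vectors
    d = hard_limiter(X @ np.array([1.0, -0.5]))    # desired hard-limited response
    w_hat = train_perceptron(X, d)
    print("weights near convergence:", w_hat)
```

As the abstract notes, with Gaussian inputs the weight vector such an iteration settles near is not unique; different choices of mu and alpha in this sketch will yield different stationary points.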