Is Pocket algorithm optimal?

The pocket algorithm is considered able to provide, for any classification problem, the weight vector that satisfies the maximum number of input-output relations contained in the training set. A convergence theorem asserts that an optimal configuration is reached with probability one as the number of iterations grows indefinitely. In the present paper a new formulation of this theorem is given; a rigorous proof corrects some formal and substantial errors which invalidate previous theoretical results. In particular, it is shown that the optimality of the asymptotic solution is ensured only if the number of permanences of the pocket vector lies in a proper interval of the real axis whose bounds depend on the number of iterations.
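For readers unfamiliar with the procedure under discussion, the following Python sketch shows a basic pocket variant of perceptron training on a binary classification problem. It is only a minimal illustration, assuming the common formulation in which the "pocket" stores the weights that have survived the longest run of consecutive correct classifications; the data, function names, and stopping criterion are illustrative choices, not the exact formulation analysed in the paper.

```python
import numpy as np

def pocket_train(X, y, n_iters=10000, seed=None):
    """Basic pocket algorithm sketch for labels in {-1, +1}.

    X : (n_samples, n_features) input matrix (bias column appended by caller).
    Returns the pocket weight vector, i.e. the perceptron weights that have
    survived the longest run of consecutive correct classifications so far.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)            # current perceptron weights
    pocket_w = w.copy()        # weights kept "in the pocket"
    run, pocket_run = 0, 0     # current and best runs of correct classifications

    for _ in range(n_iters):
        i = rng.integers(n)                 # draw a training example at random
        if y[i] * (X[i] @ w) > 0:           # correctly classified
            run += 1
            if run > pocket_run:            # longer run: update the pocket
                pocket_run = run
                pocket_w = w.copy()
        else:                               # misclassified: perceptron update
            w = w + y[i] * X[i]
            run = 0
    return pocket_w

# Illustrative usage on a toy, non-separable problem (hypothetical data)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
    X = np.hstack([X, np.ones((200, 1))])   # append a bias term
    w = pocket_train(X, y, n_iters=5000, seed=1)
    print("training accuracy of pocket vector:",
          np.mean(np.sign(X @ w) == y))
```

In this sketch the "number of permanences" mentioned in the abstract corresponds to how long the pocket weights are left unchanged; the theorem discussed in the paper concerns how this quantity must scale with the number of iterations for the pocket vector to be asymptotically optimal.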
