The Minimum Number of Errors in the N-Parity and its Solution with an Incremental Neural Network

The N-dimensional parity problem is frequently a difficult classification task for neural networks. We derive an expression for the minimum number of errors νf, as a function of N, that a single perceptron makes on this problem. We verified this quantity experimentally for N = 1, ..., 15 using an optimally trained perceptron. With a constructive approach, we then solve the full N-dimensional parity problem using a minimal feedforward neural network with a single hidden layer of h = N units.
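The existence of such a minimal solution can be illustrated with the classic single-hidden-layer construction: hidden threshold unit k fires when at least k of the N inputs are on, and the output unit combines the hidden activities with alternating ±1 weights. This is a standard textbook construction, offered here as a sketch; the weights in the paper's own network may differ.

```python
import itertools

def parity_net(x):
    """Compute N-bit parity with a single hidden layer of h = N
    threshold units (classic construction, not necessarily the
    paper's exact weights)."""
    n = len(x)
    s = sum(x)  # number of active inputs
    # Hidden unit k (k = 1..N): all input weights 1, threshold k - 0.5,
    # so it fires iff at least k inputs are on.
    hidden = [1 if s >= k else 0 for k in range(1, n + 1)]
    # Output unit: alternating weights +1, -1, +1, ..., threshold 0.5.
    out = sum((-1) ** k * h for k, h in enumerate(hidden))
    return 1 if out >= 0.5 else 0

# Exhaustive check against the parity function for N = 1..8.
for n in range(1, 9):
    for x in itertools.product([0, 1], repeat=n):
        assert parity_net(x) == sum(x) % 2
```

If s inputs are active, exactly the first s hidden units fire, so the output sum telescopes to 1 when s is odd and 0 when s is even, which is why N hidden units suffice.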
