Testing the Generalization of Feedforward Neural Networks with Median Neuron Input Function

In this paper, we present a preliminary experimental study of the generalization ability of feedforward neural networks with the median neuron input function (MIF). In these networks, proposed in our previous work, the signals arriving at a neuron are not summed; instead, the median of the weighted input signals is taken as the neuron's net input. The MIF networks were originally designed to be fault tolerant, but we expect them also to exhibit improved generalization ability. The results of the first experimental simulations are presented and discussed in this article, demonstrating the potentially improved performance of the MIF networks.
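The mechanism described above can be illustrated with a minimal sketch: a conventional neuron aggregates its weighted inputs with a sum, while a MIF neuron aggregates them with the median. The function names, the example data, and the use of `tanh` as the activation are illustrative assumptions, not the paper's exact formulation; the sketch only shows why a single corrupted signal barely moves the median while it can dominate the sum.

```python
import numpy as np

def sum_neuron(x, w, b, act=np.tanh):
    """Conventional neuron: activation applied to the weighted sum plus bias."""
    return act(np.dot(w, x) + b)

def mif_neuron(x, w, b, act=np.tanh):
    """MIF neuron (sketch): the weighted inputs are aggregated with the
    median instead of the sum, so one faulty input or weight cannot
    dominate the net input."""
    return act(np.median(w * x) + b)

# Fixed example data (illustrative, not from the paper).
x = np.array([0.5, -0.2, 0.1, 0.8, -0.5, 0.3, -0.1, 0.4, 0.2])
w = np.array([0.6, -0.4, 0.5, 0.3, -0.2, 0.7, 0.1, -0.3, 0.2])

# Corrupt a single input signal to simulate a fault.
x_corrupted = x.copy()
x_corrupted[0] = 10.0

# The weighted sum jumps from 0.89 to 6.59, while the median of the
# weighted inputs stays at 0.08: the MIF neuron's output is unchanged.
print("sum-based:", sum_neuron(x, w, 0.0), sum_neuron(x_corrupted, w, 0.0))
print("MIF-based:", mif_neuron(x, w, 0.0), mif_neuron(x_corrupted, w, 0.0))
```

The same robustness that motivated the fault-tolerance design is what one might expect to help generalization: outlying training signals perturb the median aggregate far less than the sum.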
