Improving MLP Neural Network Performance by Noise Reduction

In this paper we examine several methods for improving the performance of MLP neural networks by reducing the influence of outliers, and compare them experimentally on several classification and regression tasks. The examined methods include: outlier elimination prior to training, the use of robust error measures during network training, replacing the weighted input sum with a weighted median in the neuron input functions, and various combinations of these. We show how each of these methods influences the network's predictions. Based on the experimental results, we also propose a novel hybrid approach that further improves network performance.
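Two of the techniques mentioned above can be illustrated concretely. The sketch below is a minimal, hypothetical illustration (not the paper's implementation): it assumes the median-input neuron replaces the weighted sum with the median of the element-wise products w_i * x_i, and it uses the LMLS (least mean log squares) criterion, log(1 + r²/2), as an example of a robust error measure.

```python
import numpy as np

def weighted_sum_neuron(x, w, b):
    """Standard neuron input function: weighted sum of inputs plus bias."""
    return np.dot(w, x) + b

def median_neuron(x, w, b):
    """Hypothetical median-input neuron: the weighted sum is replaced by
    the median of the element-wise products w_i * x_i, so a single
    outlying input cannot dominate the neuron's activation."""
    return np.median(w * x) + b

def lmls_error(y_pred, y_true):
    """Robust error measure in the spirit of LMLS: log(1 + r^2 / 2)
    grows much more slowly than the squared error for large residuals,
    limiting the influence of outlying training targets."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return np.sum(np.log1p(0.5 * r ** 2))

# A single large outlier dominates the sum-based input but not the median.
x = np.array([0.5, 0.4, 0.6, 100.0])   # last input is an outlier
w = np.ones(4)
print(weighted_sum_neuron(x, w, 0.0))  # dominated by the outlier
print(median_neuron(x, w, 0.0))        # robust to it
```

The design intuition is the same in both cases: the sum (or squared error) has an unbounded influence function, while the median (or logarithmic loss) bounds the effect any single sample can have.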
