Robust error measure for supervised neural network learning with outliers
Most supervised neural networks (NNs) are trained by minimizing the mean squared error (MSE) over the training set. In the presence of outliers, the resulting NN model can differ significantly from the underlying system that generated the data. Two approaches are used to study the mechanism by which outliers affect the resulting models: the influence function and maximum likelihood. The mean log squared error (MLSE) is proposed as an error criterion that can be easily adopted by most supervised learning algorithms. Simulation results indicate that the proposed method is robust against outliers.
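The contrast between the two criteria can be illustrated on a one-parameter regression. The sketch below is an assumption-laden toy example, not the paper's experiment: it assumes the log-based loss takes the form log(1 + e²/2), whose gradient e/(1 + e²/2) is bounded in the residual e, whereas the MSE gradient 2e grows without bound, so a single gross outlier can dominate the MSE fit.

```python
import math

# Synthetic data: y = 2x with one gross outlier injected (values chosen for
# illustration only; they do not come from the paper).
xs = [i / 10 for i in range(1, 11)]
ys = [2 * x for x in xs]
ys[5] += 50.0  # the outlier

def fit(grad_fn, lr=0.1, steps=5000):
    """Plain gradient descent on a single slope parameter w."""
    w = 0.0
    for _ in range(steps):
        g = sum(grad_fn(w * x - y, x) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * g
    return w

# MSE: d/dw e^2 = 2*e*x -- the outlier's pull grows linearly with its residual.
w_mse = fit(lambda e, x: 2 * e * x)

# Assumed log-squared-error form log(1 + e^2/2):
# d/dw = e*x / (1 + e^2/2) -- the outlier's pull is bounded for large residuals.
w_mlse = fit(lambda e, x: e * x / (1 + e * e / 2))

print(f"true slope 2.0 | MSE fit {w_mse:.2f} | log-loss fit {w_mlse:.2f}")
```

Under these assumptions the MSE fit is dragged far from the true slope of 2 by the single outlier, while the log-loss fit stays close to it, which is the bounded-influence behavior the influence-function analysis predicts.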