Fast unit selection algorithm for neural network design

In this paper a fast neural network pruning algorithm is presented, based on an analysis of the weights in a trained network. We demonstrate that this technique selects a lean architecture with no corresponding degradation in performance. Our unit selection algorithm is compared to a state-of-the-art network pruning algorithm from the literature and is found to offer several advantages, namely its simplicity, its speed, and its ability to select the leanest architecture.
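The abstract describes selecting hidden units by analysing the weights of a trained network, but does not spell out the scoring rule. The sketch below illustrates one common weight-analysis approach of this kind, scoring each hidden unit by the total magnitude of its incoming and outgoing weights and keeping the highest-scoring units; the specific score and the function names (`unit_relevance`, `select_units`, `prune`) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def unit_relevance(W_in, W_out):
    """Score each hidden unit by total incoming + outgoing weight magnitude.

    Assumption: a simple magnitude-based relevance score; the paper's
    actual weight analysis may differ.

    W_in  : (n_inputs, n_hidden) input-to-hidden weights
    W_out : (n_hidden, n_outputs) hidden-to-output weights
    """
    return np.abs(W_in).sum(axis=0) + np.abs(W_out).sum(axis=1)

def select_units(W_in, W_out, n_keep):
    """Return indices of the n_keep most relevant hidden units."""
    scores = unit_relevance(W_in, W_out)
    return np.argsort(scores)[::-1][:n_keep]

def prune(W_in, W_out, n_keep):
    """Build pruned weight matrices containing only the selected units."""
    keep = np.sort(select_units(W_in, W_out, n_keep))
    return W_in[:, keep], W_out[keep, :]
```

Because the score is computed directly from the trained weights, selection requires only a single pass over the weight matrices, which is consistent with the speed advantage the abstract claims.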
