Pruning Divide & Conquer networks

Determining an effective architecture for multi-layer feedforward backpropagation neural networks can be a time-consuming task. In general it requires human intervention to choose the number of layers, the number of hidden cells, the learning rule, and the learning parameters. Over the past few years several approaches to dynamically configuring neural networks have been proposed, which remove most of the responsibility for choosing a correct network configuration from the user. As important as finding a viable network architecture for a given learning problem is the need to obtain a minimal configuration: the total time required to emulate or simulate a neural network depends largely on the number of connections it contains. Pruning methods that reduce network complexity are therefore essential. In this paper two approaches to network pruning are investigated: single-pass and multi-pass pruning. Their effectiveness is demonstrated by applying them to several real-world problems.
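The abstract does not spell out the pruning criterion or the exact single- versus multi-pass procedure, so the Python sketch below is only an illustration of the general idea under assumed details: single-pass pruning removes all connections below a magnitude threshold in one sweep, while multi-pass pruning alternates between cutting a small fraction of the weakest surviving connections and retraining. The `threshold` and `fraction` values and the `retrain` hook are hypothetical, not the paper's.

```python
import numpy as np

def single_pass_prune(weights, threshold=0.05):
    """Single-pass pruning: delete every connection whose magnitude
    falls below `threshold` in one sweep (illustrative criterion)."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def multi_pass_prune(weights, retrain, fraction=0.2, passes=3):
    """Multi-pass pruning: in each pass, remove the weakest `fraction`
    of the surviving connections, then retrain so the remaining
    weights can compensate for the loss."""
    mask = np.ones_like(weights, dtype=bool)
    for _ in range(passes):
        surviving = np.abs(weights[mask])
        if surviving.size == 0:
            break
        cutoff = np.quantile(surviving, fraction)
        mask &= np.abs(weights) >= cutoff
        # `retrain` is a hypothetical hook: it should run a few epochs of
        # backpropagation while keeping masked-out weights fixed at zero.
        weights = retrain(weights * mask, mask)
    return weights * mask, mask
```

A single-pass run is cheap but risks over-pruning, since every connection is judged at once; the multi-pass variant lets the network recover between cuts, at the cost of repeated retraining.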
