Pruning Artificial Neural Networks Using Neural Complexity Measures

This paper describes a new method for pruning artificial neural networks that uses a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, an approach related to, yet distinct from, previous research on pruning. The proposed method shows how overly large and complex networks can be reduced in size while retaining learnt behaviour and fitness, and it helps to discover a network topology that matches the complexity of the problem the network is meant to solve. The technique is tested in a robot control domain, a simulated racecar, and is shown to be a significant improvement over the most commonly used pruning method, magnitude-based pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network from which they originate. The pruning method can therefore also unleash hidden potential in a network: because pruning reduces the dimensionality of the network, the learning time of a pruned network decreases substantially.
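The abstract does not specify the exact complexity measure or the rule used to select connections, so the following is only a minimal sketch of the two ideas being contrasted: the magnitude-based baseline named above, and a hypothetical complexity-guided variant that greedily removes the connection whose removal least perturbs an information-theoretic score of the network's activations. The score used here is a Gaussian-estimated "integration" quantity (sum of marginal entropies minus the joint entropy), standing in for the paper's unspecified neural complexity measure; all function names (`integration`, `complexity_guided_prune`, `unit_activations`) are illustrative and not the paper's API.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance `cov`."""
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

def integration(samples):
    """Stand-in information-theoretic score: sum_i H(x_i) - H(X), estimated under a
    Gaussian approximation from activation samples (rows = observations)."""
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    h_joint = gaussian_entropy(cov)
    h_marginals = sum(gaussian_entropy(np.array([[cov[i, i]]])) for i in range(cov.shape[0]))
    return h_marginals - h_joint

def unit_activations(W, n_samples=2000, seed=0):
    """Toy single-layer network: record hidden activations for random inputs."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, W.shape[1]))
    return np.tanh(X @ W.T)

def magnitude_prune(W, frac):
    """Baseline from the abstract: zero out the smallest-magnitude fraction of weights."""
    W = W.copy()
    nonzero = np.abs(W[W != 0])
    if nonzero.size == 0:
        return W
    threshold = np.quantile(nonzero, frac)
    W[np.abs(W) < threshold] = 0.0
    return W

def complexity_guided_prune(W, frac,
                            complexity=lambda W: integration(unit_activations(W))):
    """Illustrative greedy loop: repeatedly remove the connection whose removal
    changes the complexity score the least (a guess at the selection rule)."""
    W = W.copy()
    n_remove = int(frac * np.count_nonzero(W))
    for _ in range(n_remove):
        base = complexity(W)
        best, best_delta = None, np.inf
        for idx in zip(*np.nonzero(W)):
            saved = W[idx]
            W[idx] = 0.0                      # tentatively prune this connection
            delta = abs(complexity(W) - base)
            W[idx] = saved                    # restore before trying the next one
            if delta < best_delta:
                best, best_delta = idx, delta
        W[best] = 0.0                         # commit the least-disruptive pruning
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(size=(8, 6))  # 8 hidden units, 6 inputs
    print("magnitude-pruned nonzeros: ", np.count_nonzero(magnitude_prune(W, 0.5)))
    print("complexity-pruned nonzeros:", np.count_nonzero(complexity_guided_prune(W, 0.5)))
```

In practice the paper evaluates pruned controllers on retained fitness in the racecar task; the greedy score-preservation criterion above is only one plausible way to turn a complexity measure into a pruning decision.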
