Simple unit-pruning with gain-changing training

In this paper, a novel scheme for pruning the units of a neural network is proposed. The scheme consists of a simple unit-pruning algorithm augmented by a new training algorithm called gain-changing training. In gain-changing training, the gain of each unit is adjusted so that the network's function becomes concentrated in fewer units, i.e., some units come to play important roles while the others play negligible ones; the negligible units can then be pruned. Experiments were performed with neural filters (NFs) that reduce noise in natural and medical images. The results demonstrate that the proposed scheme outperforms conventional methods, including optimal brain damage (OBD): it yields smaller networks, and the NFs it produces achieve higher performance and better generalization ability.
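The abstract does not give the exact update rules, so the sketch below only illustrates the general idea under stated assumptions: each hidden unit's output is scaled by a trainable gain, an assumed L1-style decay term pressures unused gains toward zero during training, and units whose gains end up small are pruned afterward. The toy task, threshold, and decay coefficient are all hypothetical choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression task standing in for image filtering.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] * X[:, 1]

H = 8                                   # hidden units before pruning
W1 = rng.normal(0.0, 0.5, size=(2, H))
b1 = np.zeros(H)
g = np.ones(H)                          # per-unit gains (the quantity gain-changing adjusts)
W2 = rng.normal(0.0, 0.5, size=H)
b2 = 0.0
lr, gain_decay = 0.05, 1e-3             # gain_decay: assumed sparsity pressure on gains

for _ in range(1000):
    a = np.tanh(X @ W1 + b1)            # raw hidden activations
    h = g * a                           # each unit's contribution is scaled by its gain
    err = h @ W2 + b2 - y               # prediction error
    n = len(X)
    dh = np.outer(err, W2)              # gradient w.r.t. h (the 1/n is folded in below)
    # Gain update: task gradient plus an L1-style decay driving unused gains to zero.
    dg = (dh * a).mean(axis=0) + gain_decay * np.sign(g)
    dz = dh * g * (1.0 - a ** 2)        # back through the tanh nonlinearity
    W2 -= lr * (h.T @ err) / n
    b2 -= lr * err.mean()
    g -= lr * dg
    W1 -= lr * (X.T @ dz) / n
    b1 -= lr * dz.mean(axis=0)

# Prune: keep only units whose gain magnitude stayed above a threshold.
keep = np.abs(g) > 0.1
W1p, b1p, gp, W2p = W1[:, keep], b1[keep], g[keep], W2[keep]
pred_pruned = (gp * np.tanh(X @ W1p + b1p)) @ W2p + b2
```

Because the gains concentrate the network's function, thresholding them gives a direct pruning criterion without the Hessian estimates that OBD-style methods require.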

[1]  Geoffrey E. Hinton, et al.  Learning internal representations by error propagation, 1986.

[2]  Masumi Ishikawa, et al.  Structural learning with forgetting, 1996, Neural Networks.

[3]  D. Rumelhart, et al.  Generalization by weight-elimination applied to currency exchange rate prediction, 1991, Proceedings of the 1991 IEEE International Joint Conference on Neural Networks.

[4]  Kenji Suzuki, et al.  Designing the optimal structure of a neural filter, 1998, Neural Networks for Signal Processing VIII: Proceedings of the 1998 IEEE Signal Processing Society Workshop.

[5]  Geoffrey E. Hinton, et al.  Simplifying neural networks by soft weight-sharing, 1992, Neural Computation.

[6]  S. Amari, et al.  Network information criterion: determining the number of hidden units for an artificial neural network model, 1994, IEEE Trans. Neural Networks.

[7]  Masafumi Hagiwara.  Novel backpropagation algorithm for reduction of hidden units and acceleration of convergence using artificial selection, 1990, 1990 IJCNN International Joint Conference on Neural Networks.

[8]  Jocelyn Sietsma, et al.  Creating artificial neural networks that generalize, 1991, Neural Networks.

[9]  Aggelos K. Katsaggelos, et al.  Noise reduction filters for dynamic image sequences: a review, 1995, Proceedings of the IEEE.

[10]  Chuanyi Ji, et al.  Generalizing smoothness constraints from discrete samples, 1990, Neural Computation.

[11]  Jaakko Astola, et al.  A new class of nonlinear filters: neural filters, 1993, IEEE Trans. Signal Processing.

[12]  D. B. Fogel, et al.  An information criterion for optimal neural network selection, 1990, Conference Record of the Twenty-Fourth Asilomar Conference on Signals, Systems and Computers.

[13]  Yves Chauvin, et al.  A back-propagation algorithm with optimal use of hidden units, 1988, NIPS.

[14]  Isao Horiba, et al.  Efficient approximation of a neural filter for quantum noise removal in X-ray images, 1999, Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop.

[15]  Giovanna Castellano, et al.  An iterative pruning algorithm for feedforward neural networks, 1997, IEEE Trans. Neural Networks.

[16]  Jaakko Astola, et al.  Adaptive multistage weighted order statistic filters based on the backpropagation algorithm, 1994, IEEE Trans. Signal Processing.

[17]  Yann LeCun, et al.  Optimal brain damage, 1989, NIPS.