A Simple Neural Network Pruning Algorithm with Application to Filter Synthesis

This paper describes an approach to synthesizing desired filters using a multilayer neural network (NN). To make the network acquire the correct function of the target filter, a simple method for reducing the structure of both the input and hidden layers of the NN is proposed. In the proposed method, units are removed from the NN according to the influence their removal has on the output error, and the NN is then retrained to recover from the damage caused by the removal. These two steps are performed alternately, progressively reducing the structure. Experiments on synthesizing a known filter were performed; analysis of the NN obtained by the proposed method shows that it acquires the correct function of the target filter. In an experiment on synthesizing a filter for a real signal processing task, the NN obtained by the proposed method is shown to be superior to that obtained by the conventional method in terms of both filter performance and computational cost.
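
The following is a minimal sketch of the alternating prune-and-retrain loop described above, written in Python with NumPy. For brevity it prunes only hidden units (the paper also reduces the input layer), and the network, training routine, sensitivity measure (the increase in error when a unit is zeroed out), and the `tol` threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, mask):
    """Two-layer NN; `mask` zeroes out pruned hidden units."""
    return (np.tanh(x @ W1) * mask) @ W2

def error(X, y, W1, W2, mask):
    """Mean squared error over the training set."""
    return np.mean((forward(X, W1, W2, mask) - y) ** 2)

def retrain(X, y, W1, W2, mask, lr=0.01, epochs=200):
    """Plain gradient-descent retraining to recover from pruning damage."""
    for _ in range(epochs):
        z = np.tanh(X @ W1)
        h = z * mask
        out = h @ W2
        d_out = 2.0 * (out - y) / len(X)
        grad_W2 = h.T @ d_out
        grad_W1 = X.T @ ((d_out @ W2.T) * (1.0 - z ** 2) * mask)
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
    return W1, W2

def prune(X, y, W1, W2, mask, tol=1e-3):
    """Remove the hidden unit whose removal increases the error the least,
    provided that increase stays below `tol` (an illustrative threshold)."""
    base = error(X, y, W1, W2, mask)
    best_unit, best_increase = None, np.inf
    for j in np.flatnonzero(mask):
        trial = mask.copy()
        trial[j] = 0.0
        inc = error(X, y, W1, W2, trial) - base
        if inc < best_increase:
            best_unit, best_increase = j, inc
    if best_unit is not None and best_increase < tol:
        mask[best_unit] = 0.0
        return True
    return False

# Toy data: synthesize a known target filter (here, a 5-tap moving average).
X = rng.normal(size=(256, 5))
y = X.mean(axis=1, keepdims=True)

W1 = rng.normal(scale=0.5, size=(5, 10))
W2 = rng.normal(scale=0.5, size=(10, 1))
mask = np.ones(10)

W1, W2 = retrain(X, y, W1, W2, mask)
# Alternate pruning and retraining until no unit can be removed cheaply.
while prune(X, y, W1, W2, mask):
    W1, W2 = retrain(X, y, W1, W2, mask)

print("remaining hidden units:", int(mask.sum()),
      "final error:", error(X, y, W1, W2, mask))
```

In this sketch the sensitivity of each unit is measured directly as the error increase after zeroing it out, which is one simple way to realize "removal based on the influence of each unit on the error"; the paper's actual criterion may differ.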
