A new feedforward neural network hidden layer neuron pruning algorithm

This paper presents a new approach to determining the structure (i.e. the number of hidden units) of a feedforward neural network (FNN). The approach is based on the principle that any FNN can be represented by a Volterra series, i.e. a nonlinear input-output model. The proposed algorithm proceeds in three steps: first, we expand the nonlinear activation function of the hidden-layer neurons in a Taylor series; second, we express the neural network output as a NARX (nonlinear autoregressive with exogenous input) model; finally, using the nonlinear order selection algorithm proposed by Kortmann and Unbehauen (1988), we select the most relevant terms of the resulting NARX model. Starting from the output layer, this pruning procedure is applied to each node in each layer. Combining the new algorithm with standard backpropagation (SBP) over various initial conditions, we perform Monte Carlo experiments that show a drastic reduction in the number of nonsignificant hidden-layer neurons.
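The three-step idea can be loosely illustrated in code. The sketch below is an assumption-laden simplification, not the authors' exact procedure: it replaces the Kortmann-Unbehauen order selection test with a simple variance-contribution threshold, uses a fixed toy network, and expands a tanh activation in a low-order Taylor series to obtain a polynomial (NARX-like) surrogate of the hidden layer before ranking neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FNN: 2 inputs, 5 tanh hidden units, 1 linear output.
# Hidden units 3 and 4 have near-zero output weights, so a pruning
# pass should flag them as nonsignificant.
W = np.array([[1.0, -0.5],   # input-to-hidden weights (fixed for clarity)
              [0.5,  1.0],
              [-1.0, 0.8],
              [0.7,  0.6],
              [0.9, -0.4]])
b = np.array([0.1, -0.2, 0.0, 0.3, -0.1])      # hidden biases
v = np.array([1.2, -0.8, 0.9, 1e-4, -2e-4])    # hidden-to-output weights

# Probe inputs kept small so the Taylor expansion of tanh stays valid.
X = 0.5 * rng.normal(size=(500, 2))

def tanh_taylor(z, order=5):
    """Odd-order Taylor expansion of tanh about 0: z - z^3/3 + 2z^5/15."""
    t = z - z**3 / 3
    if order >= 5:
        t += 2 * z**5 / 15
    return t

Z = X @ W.T + b            # pre-activations of the hidden layer
H = tanh_taylor(Z)         # polynomial surrogate of the hidden activations

# Rank each hidden unit by the variance of its contribution v_j * h_j(x)
# to the network output; prune units with a negligible relative share.
contrib = np.var(H * v, axis=0)
keep = contrib / contrib.sum() > 1e-3
print("kept hidden units:", np.flatnonzero(keep))
```

Under this (hypothetical) threshold of 0.1% of the total output variance, the two near-dead units are pruned while the three significant ones survive, mirroring the kind of reduction the Monte Carlo experiments report.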