This paper presents a new approach to determining the structure of a feedforward neural network (FNN), i.e., the number of hidden units. The approach rests on the principle that any FNN can be represented by a Volterra series, that is, as a nonlinear input-output model. The proposed algorithm proceeds in three steps: first, we expand the nonlinear activation function of the hidden-layer neurons in a Taylor series; second, we express the neural network output as a NARX (nonlinear autoregressive with exogenous input) model; finally, we apply the nonlinear order-selection algorithm of Kortmann-Unbehauen (1988) to select the most relevant regressors of the resulting NARX model. Starting from the output layer, this pruning procedure is applied to each node in each layer. Combining the new algorithm with standard backpropagation (SBP) training over various initial conditions, we perform Monte Carlo experiments that yield a drastic reduction in the number of nonsignificant hidden-layer neurons.
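To make the three steps concrete, the following is a minimal sketch in Python. It assumes a tanh hidden layer, builds NARX-style monomial regressors from lagged inputs and outputs, and uses a generic greedy error-reduction criterion as a stand-in for the Kortmann-Unbehauen order-selection test; all function names and the toy system are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the three-step idea; the greedy error-reduction
    # criterion below is a stand-in for the Kortmann-Unbehauen test.
    import numpy as np
    from itertools import combinations_with_replacement

    def taylor_tanh(z, order=3):
        """Step 1: Taylor expansion of the tanh activation around 0."""
        coeffs = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0}
        return sum(c * z**p for p, c in coeffs.items() if p <= order)

    def narx_regressors(u, y, lag=2, degree=2):
        """Step 2: build NARX-style monomial regressors from lagged u and y."""
        base, names = [], []
        for k in range(1, lag + 1):
            base.append(np.r_[np.zeros(k), y[:-k]]); names.append(f"y(t-{k})")
            base.append(np.r_[np.zeros(k), u[:-k]]); names.append(f"u(t-{k})")
        base = np.array(base)
        cols, labels = [], []
        for d in range(1, degree + 1):
            for combo in combinations_with_replacement(range(len(base)), d):
                cols.append(np.prod(base[list(combo)], axis=0))
                labels.append("*".join(names[i] for i in combo))
        return np.array(cols).T, labels

    def select_regressors(Phi, y, n_select):
        """Step 3: greedy forward selection of the most relevant regressors,
        ranking candidates by how much each reduces the residual."""
        residual = y.astype(float)
        selected, remaining = [], list(range(Phi.shape[1]))
        for _ in range(n_select):
            scores = [(Phi[:, j] @ residual) ** 2 / (Phi[:, j] @ Phi[:, j] + 1e-12)
                      for j in remaining]
            best = remaining[int(np.argmax(scores))]
            selected.append(best)
            remaining.remove(best)
            phi = Phi[:, best]
            residual = residual - phi * (phi @ residual) / (phi @ phi + 1e-12)
        return selected

    # Toy usage: a nonlinear system y(t) = 0.5 y(t-1) + u(t-1)^2 + noise
    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, 300)
    y = np.zeros(300)
    for t in range(1, 300):
        y[t] = 0.5 * y[t - 1] + u[t - 1] ** 2 + 0.01 * rng.standard_normal()

    print(np.tanh(0.3), taylor_tanh(0.3, order=5))  # step 1: polynomial approximation
    Phi, labels = narx_regressors(u, y)
    picked = select_regressors(Phi, y, n_select=2)
    print([labels[i] for i in picked])  # the most relevant NARX terms

In the paper's procedure, this kind of regressor ranking is applied node by node, starting from the output layer, so that hidden units whose contributions never appear among the selected terms can be flagged as nonsignificant and pruned.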