Conventional modeling of the multilayer perceptron using polynomial basis functions

A technique for modeling the multilayer perceptron (MLP) neural network, in which input and hidden units are represented by polynomial basis functions (PBFs), is presented. The MLP output is expressed as a linear combination of the PBFs, and is therefore a polynomial function of the network inputs. Thus, the MLP is isomorphic to conventional polynomial discriminant classifiers or Volterra filters. The modeling technique was successfully applied to several trained MLP networks.
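As a loose illustration of this idea (a sketch only: the network sizes, weights, monomial basis, and least-squares fit below are assumptions for demonstration, not the paper's modeling procedure), a small fixed-weight MLP can be approximated by a linear combination of monomial basis functions, and the resulting polynomial closely tracks the network output:

    import numpy as np

    # Hypothetical sketch: approximate a small "trained" MLP by a linear
    # combination of polynomial basis functions (monomials), fitted by
    # least squares on sampled inputs. Weights are random stand-ins.
    rng = np.random.default_rng(0)

    W1 = rng.normal(scale=0.5, size=(3, 2))   # input-to-hidden weights
    b1 = rng.normal(scale=0.1, size=3)        # hidden biases
    w2 = rng.normal(scale=0.5, size=3)        # hidden-to-output weights
    b2 = 0.1                                  # output bias

    def mlp(x):
        """MLP output for inputs x of shape (n_samples, 2)."""
        return np.tanh(x @ W1.T + b1) @ w2 + b2

    def poly_basis(x, degree=3):
        """Monomial basis functions x1^i * x2^j with i + j <= degree."""
        cols = [x[:, 0]**i * x[:, 1]**j
                for i in range(degree + 1)
                for j in range(degree + 1 - i)]
        return np.stack(cols, axis=1)

    # Fit the basis-function coefficients by least squares on sample points.
    x = rng.uniform(-1.0, 1.0, size=(500, 2))
    coef, *_ = np.linalg.lstsq(poly_basis(x), mlp(x), rcond=None)

    # On fresh inputs the polynomial model tracks the MLP closely,
    # illustrating that the network output behaves like a polynomial
    # in its inputs over the sampled region.
    x_test = rng.uniform(-1.0, 1.0, size=(200, 2))
    err = np.max(np.abs(poly_basis(x_test) @ coef - mlp(x_test)))
    print(f"max absolute error of degree-3 polynomial model: {err:.4f}")

The least-squares fit here is only one way to obtain polynomial coefficients; the coefficients could equally be read off from truncated power-series expansions of the hidden-unit activations, which is closer in spirit to the PBF representation described above.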
