Backpropagation representation theorem using power series

A representation theorem is developed for backpropagation neural networks. First, the function to be approximated, F(x) of the input vector x, is assumed to be continuous with finite support, so that it can be approximated arbitrarily well by a multidimensional power series. The activation function, sigmoid or otherwise, is then approximated by a power series in the net input. Basic building-block subnetworks, each realizing a monomial (a product of powers of the inputs), can be implemented to any desired degree of accuracy. Each term in the power series for F(x) is realizable by such a building block, and each building block has one hidden layer. Hence, the overall network requires only one hidden layer.
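As an illustration of the building-block idea, the sketch below approximates the product of two inputs with a single hidden layer of sigmoid units. This is a minimal numerical example and not necessarily the paper's exact construction: it assumes a sigmoid activation, an arbitrarily chosen bias point b at which the second derivative is nonzero, and the identity xy = ((x+y)^2 - (x-y)^2)/4, with the squares recovered from the quadratic term of the activation's power series around b.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sigmoid_d2(t):
    # Second derivative of the sigmoid, used to scale the quadratic term
    # of its power series about the bias point.
    s = sigmoid(t)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def product_block(x, y, a=1e-2, b=1.0):
    """Illustrative one-hidden-layer building block approximating x*y.

    Four hidden units compute sigmoid(b +/- a*(x+y)) and sigmoid(b +/- a*(x-y));
    a fixed linear output layer combines them.  For small a, the even terms of
    the sigmoid's power series give
        sigmoid(b+u) + sigmoid(b-u) ~ 2*sigmoid(b) + sigmoid''(b)*u**2,
    so the difference of the two pairs isolates (x+y)^2 - (x-y)^2 = 4*x*y.
    The names and parameter values here are illustrative assumptions.
    """
    s, d = x + y, x - y
    h = np.array([sigmoid(b + a * s), sigmoid(b - a * s),
                  sigmoid(b + a * d), sigmoid(b - a * d)])
    w = np.array([1.0, 1.0, -1.0, -1.0]) / (4.0 * a**2 * sigmoid_d2(b))
    return w @ h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.uniform(-1.0, 1.0, 100)
    ys = rng.uniform(-1.0, 1.0, 100)
    approx = np.array([product_block(x, y) for x, y in zip(xs, ys)])
    print("max |approx - x*y| =", np.max(np.abs(approx - xs * ys)))
```

The approximation error shrinks as the scale factor a decreases, mirroring the paper's point that each monomial term, and hence each term of the power series for F(x), can be realized to any desired accuracy by a one-hidden-layer subnetwork.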
