Comparing neural network approximations for different functional forms

This paper examines the capacity of feedforward neural networks (NNs) to approximate certain functional forms. Its purpose is to show that the theoretical property of ‘universal approximation’, which provides the basic rationale behind the NN approach, should not be interpreted too literally. The central issue considered is the number of hidden layers in the network. We show that, for a number of interesting functional forms, better generalization is possible with more than one hidden layer, despite theoretical results stating that a single hidden layer suffices for approximation. Our experiments constitute a useful set of counter-examples.
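
To make the comparison concrete, the sketch below contrasts one- and two-hidden-layer networks fitted to an arbitrary illustrative target function; the target function, layer widths, and use of scikit-learn's `MLPRegressor` are assumptions for illustration only, not the functional forms or training procedure used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical target function (assumption): f(x1, x2) = sin(pi*x1) * x2^2.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(500, 2))
y_train = np.sin(np.pi * X_train[:, 0]) * X_train[:, 1] ** 2
X_test = rng.uniform(-1, 1, size=(200, 2))
y_test = np.sin(np.pi * X_test[:, 0]) * X_test[:, 1] ** 2

# Compare a single hidden layer against two hidden layers (widths are illustrative).
for hidden in [(20,), (10, 10)]:
    net = MLPRegressor(hidden_layer_sizes=hidden, activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X_train, y_train)
    # Held-out R^2 serves as a simple proxy for generalization quality.
    print(f"hidden layers {hidden}: test R^2 = {net.score(X_test, y_test):.3f}")
```

Which architecture generalizes better will depend on the target function, the layer widths, and the training regime; the paper's argument is precisely that such empirical differences persist even though universal approximation holds for a single hidden layer.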