Neural networks and nonlinear optimization in the representation of continuous functions

In this paper we introduce the concept of representing the objective function of an unconstrained optimization problem by a neural network. We briefly discuss the learning problem that must be solved for such a representation to be possible, and we illustrate that solving the learning problem by placing a truncated Gauss-Newton code in a multistart context may be much more efficient than using backpropagation. We then review the known theorems guaranteeing the existence of a neural network that represents the objective function, and we present a methodology for determining the number of data points needed for such a representation to be possible.
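
The multistart least-squares training idea can be sketched as follows; this is a minimal illustration, not the paper's code, assuming a synthetic one-dimensional objective and SciPy's Levenberg-Marquardt solver as a stand-in for a truncated Gauss-Newton code:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: fit a one-hidden-layer network
#   N(x; w) = sum_j v_j * tanh(a_j * x + b_j)
# to samples (x_i, f(x_i)) of the objective function f.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
f = np.sin(np.pi * x)            # stand-in samples of the objective
n_hidden = 8

def residuals(w):
    a, b, v = np.split(w, 3)     # hidden weights, biases, output weights
    pred = np.tanh(np.outer(x, a) + b) @ v
    return pred - f              # least_squares minimizes 0.5 * ||residuals||^2

# Multistart: run a Gauss-Newton-type least-squares solver from several
# random initial points and keep the best local minimum found.
best = None
for _ in range(20):
    w0 = rng.standard_normal(3 * n_hidden)
    sol = least_squares(residuals, w0, method="lm")  # Levenberg-Marquardt
    if best is None or sol.cost < best.cost:
        best = sol

print(f"best residual cost after multistart: {best.cost:.3e}")
```

Each restart solves the same nonlinear least-squares learning problem from a different initial weight vector, so the loop trades extra local solves for a better chance of escaping poor local minima, which is the efficiency comparison against backpropagation drawn above.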