Efficient learning of continuous neural networks

We describe an efficient algorithm for learning from examples a class of feedforward neural networks with real inputs and outputs in a real-valued generalization of the Probably Approximately Correct (PAC) model. These networks can approximate an arbitrary function to arbitrary precision. The learning algorithm can accommodate a fairly general worst-case noise model. The main improvement over previous work is that the running time of the algorithm grows only polynomially with the size of the target network (there remains an exponential dependence on the dimension of the input space, however). The main computational tool is an iterative "loading" algorithm that adds new hidden units to the hypothesis network sequentially. This avoids the difficult problem of optimizing the weights of all units simultaneously.
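To make the sequential-loading idea concrete, here is a minimal sketch of the general strategy the abstract describes: hidden units are added one at a time, each fitted against the current residual, rather than optimizing all weights at once. This is not the paper's algorithm; the function name fit_incrementally, the sigmoid activation, the random candidate search, and the least-squares refit of the output weights are all illustrative assumptions.

```python
# Illustrative sketch of sequential hidden-unit "loading" (not the paper's
# algorithm): each new unit is fitted to the residual, previously accepted
# hidden weights stay frozen, and output weights are refit by least squares.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_incrementally(X, y, n_units=10, n_candidates=200):
    """Greedily add sigmoidal hidden units one at a time."""
    n, d = X.shape
    hidden = []                      # accepted units, as (w, b) pairs
    H = np.ones((n, 1))              # design matrix; bias column first
    coef = np.linalg.lstsq(H, y, rcond=None)[0]
    for _ in range(n_units):
        residual = y - H @ coef
        # Sample random candidate units and keep the one best aligned with
        # the residual (a simple stand-in for a proper unit search).
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            w = rng.normal(size=d)
            b = rng.normal()
            h = sigmoid(X @ w + b)
            hc = h - h.mean()
            score = abs(hc @ residual) / (np.linalg.norm(hc) + 1e-12)
            if score > best_score:
                best, best_score = (w, b), score
        hidden.append(best)
        # Append the new unit's column and refit output weights only.
        H = np.column_stack([H, sigmoid(X @ best[0] + best[1])])
        coef = np.linalg.lstsq(H, y, rcond=None)[0]
    return hidden, coef

# Toy usage: learn a smooth one-dimensional target from noisy samples.
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=400)
hidden, coef = fit_incrementally(X, y)
H = np.column_stack([np.ones(len(X))] + [sigmoid(X @ w + b) for w, b in hidden])
print("training RMSE:", np.sqrt(np.mean((y - H @ coef) ** 2)))
```

The design point the sketch illustrates is the one the abstract emphasizes: because only one unit's input weights are chosen at each step (and the output layer is a linear problem), no step requires jointly optimizing the weights of all hidden units.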