Automatic determination of network size for supervised learning

Determining the appropriate size of an artificial neural network for a given supervised learning problem has usually been done through educated guesswork rather than by automated means. The authors address this issue by formulating the problem as an automatic search over the space of functions corresponding to a subclass of multilayer feedforward networks. Learning thus becomes a dynamic network construction process in which both the network weights and the topology are adjusted. Adding new hidden units corresponds to extracting new features from the input attributes in order to reduce the residual classification errors. It is argued that the process takes advantage of the transfer effects of prior learning when constructing larger networks from smaller ones. Empirical results from several supervised learning experiments are also reported.
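
The sketch below is only a rough illustration of this kind of dynamic construction, not the authors' exact procedure: a single-hidden-layer network is trained for a fixed budget, and one hidden unit with small random weights is appended whenever the current topology still misclassifies some training example, so that previously learned weights are reused as the network grows. The class and function names (GrowingNet, fit, add_hidden_unit) and the grow-when-training-fails criterion are assumptions made for the example.

```python
# Minimal sketch of dynamic network construction (illustrative only):
# grow a single-hidden-layer network by adding hidden units, keeping
# previously trained weights so earlier learning transfers to the
# larger network.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingNet:
    def __init__(self, n_in, n_hidden=1, lr=0.5):
        self.lr = lr
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_epoch(self, X, t):
        y = self.forward(X)
        err = y - t                      # residual classification error
        # Backpropagation for a squared-error loss.
        d_out = err * y * (1 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_hid
        self.b1 -= self.lr * d_hid.sum(axis=0)
        return float(np.mean(err ** 2))

    def add_hidden_unit(self):
        # Grow the topology: append one hidden unit with small random
        # weights while keeping all previously trained weights intact.
        n_in = self.W1.shape[0]
        self.W1 = np.hstack([self.W1, rng.normal(scale=0.5, size=(n_in, 1))])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.vstack([self.W2, rng.normal(scale=0.5, size=(1, 1))])

def fit(X, t, max_hidden=10, epochs_per_stage=2000):
    net = GrowingNet(X.shape[1])
    while True:
        for _ in range(epochs_per_stage):
            net.train_epoch(X, t)
        correct = np.all((net.forward(X) > 0.5) == (t > 0.5))
        if correct or net.W1.shape[1] >= max_hidden:
            return net
        net.add_hidden_unit()            # grow, then keep training

# Usage: XOR is not representable with one hidden unit, so the
# network should grow until it classifies all four patterns.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
net = fit(X, t)
print("hidden units:", net.W1.shape[1])
print("outputs:", net.forward(X).ravel().round(2))
```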