On the number of training points needed for adequate training of feedforward neural networks
The authors address the problem of training neural networks to act as approximations of continuous mappings. In the case where the only representation of the mapping within the training process is through a finite set of training points, they show that in order for this set of points to provide an adequate representation of the mapping, it must contain a number of points which rises at least exponentially quickly with the dimension of the input space. Thus they also show that the time taken to train the networks will rise at least exponentially quickly with the dimension of the input. They conclude that if the only training algorithms available rely upon a finite training set, then the application of neural networks to the approximation problem is impractical whenever the dimension of the input is large. By extrapolating their experimental results, they estimate that 'large' in this respect means 'greater than ten'.
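A back-of-the-envelope illustration of this scaling (a sketch of the standard grid-covering argument, not a bound quoted from the paper): covering the unit cube $[0,1]^d$ with training points spaced $\varepsilon$ apart along each coordinate already requires

\[
  N(\varepsilon, d) = \left(\frac{1}{\varepsilon}\right)^{d}, \qquad
  \text{e.g.}\quad N(0.1, 2) = 10^{2}, \quad N(0.1, 10) = 10^{10},
\]

so even at a modest per-axis resolution of $\varepsilon = 0.1$ the required number of points grows from a hundred in two dimensions to ten billion in ten, which is consistent with the authors' estimate that input dimensions greater than ten make the approach impractical.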