How to Find Suitable Parametric Models using Genetic Algorithms. Application to Feedforward Neural Networks
Most nonlinear models based on polynomials, wavelets, or neural networks have the universal approximation property [Barron, 1993]. This property allows nonlinear models to outperform linear models as soon as the problem includes nonlinear correlations between variables. This can be a strong advantage, but this flexibility, combined with the infinite variety of model structures, carries a danger named overfitting. Whatever problem you attempt to solve with a nonlinear parametric model (classification, regression, control…), in general you have a certain amount of data (we denote these data the learning base) that you use for parameter estimation. What you want is a model that performs well on a set of novel data (named the test set). If you observe significantly worse results on the test set, the generalization ability of the model for this specific problem is poor and the model overfits the learning set. Fitting a model with too many degrees of freedom (in general, too many free parameters) to an insufficient amount of noisy data can lead to underestimating the noise variance and overfitting the data.
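The overfitting phenomenon described above can be sketched with a minimal toy experiment (this is an illustrative assumption, not code from the paper): fit polynomials of low and high degree to the same noisy learning base, then compare the error on the learning base with the error on a held-out test set. The data, degrees, and noise level below are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy samples of a simple nonlinear target.
x = np.linspace(-1.0, 1.0, 30)
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=x.shape)

# Split into a learning base (even indices) and a test set (odd indices).
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def fit_and_score(degree):
    """Fit a polynomial on the learning base; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda xs, ys: float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

results = {d: fit_and_score(d) for d in (3, 10)}
for d, (tr, te) in results.items():
    print(f"degree {d:2d}: train MSE = {tr:.4f}, test MSE = {te:.4f}")
```

The high-degree model drives the error on the learning base toward zero while its test error grows: it has absorbed the noise into its many free parameters, which is exactly the underestimation of the noise variance the abstract warns about.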
[1] Marie Cottrell et al., "Neural modeling for time series: A statistical stepwise method for weight elimination," IEEE Trans. Neural Networks, 1995.
[2] Andreas S. Weigend et al., "The Future of Time Series: Learning and Understanding," 1993.
[3] Andrew R. Barron, "Universal approximation bounds for superpositions of a sigmoidal function," IEEE Trans. Inf. Theory, 1993.