Improving generalization of a well-trained network

Feedforward neural networks trained on a small set of noisy samples are prone to overtraining and poor generalization. On the other hand, a very small network may not be trainable at all, since it is biased by its own architecture. It is therefore a long-standing problem to ensure that a well-trained network also generalizes well. Theoretical results give bounds on the generalization error, but these are worst-case estimates of limited practical use. In practice, cross-validation is used to estimate generalization. We propose a method to construct a network so as to ensure good generalization, even after extensive training. Simulations show very good results in support of our algorithm. Some theoretical aspects are discussed.
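
As an illustration of the cross-validation estimate mentioned above (not part of the proposed construction method), the following is a minimal sketch in Python, assuming scikit-learn; the network size, noise level, and dataset are illustrative choices, not values from the paper.

```python
# Minimal sketch (not from the paper): estimating generalization of a small
# feedforward network via k-fold cross-validation on a noisy dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Small, noisy sample: 200 points with 10% of labels flipped to simulate noise.
X, y = make_classification(n_samples=200, n_features=10, flip_y=0.10,
                           random_state=0)

# A deliberately oversized network that fits the training data easily
# and is therefore prone to overtraining.
net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000, random_state=0)

# 5-fold cross-validation: scores on the held-out folds estimate
# generalization rather than training accuracy.
scores = cross_val_score(net, X, y, cv=5)
print("estimated generalization accuracy: %.3f +/- %.3f"
      % (scores.mean(), scores.std()))
```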