Generalisation in Feedforward Networks

We discuss a model of consistent learning with an additional restriction on the probability distribution of training samples, the target concept, and the hypothesis class. We show that the model yields a significant improvement on the upper bounds of sample complexity, i.e. the minimal number of random training samples required to select a hypothesis with a predefined accuracy and confidence. Further, we show that the model has the potential to provide a finite sample complexity even in the case of infinite VC-dimension, as well as a sample complexity below the VC-dimension. This is achieved by linking sample complexity to an "average" number of implementable dichotomies of a training sample rather than to the maximal size of a shattered sample, i.e. the VC-dimension.
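
For context, a minimal sketch of the classical distribution-free bound that such results improve upon, assuming the standard PAC setting for consistent learners with accuracy parameter \(\varepsilon\), confidence \(1-\delta\), and a hypothesis class of VC-dimension \(d\) (this is the well-known worst-case form; the constants and the refined bound itself are not taken from this paper):
\[
m(\varepsilon,\delta) \;=\; O\!\left(\frac{1}{\varepsilon}\left(d\,\log\frac{1}{\varepsilon} \;+\; \log\frac{1}{\delta}\right)\right).
\]
The improvement described in the abstract amounts to replacing the worst-case dependence on \(d\) (the largest shattered sample size) with a quantity governed by the average number of dichotomies implementable on a random training sample.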