Generalizing the PAC model: sample size bounds from metric dimension-based uniform convergence results

The probably approximately correct (PAC) model of learning from examples is generalized. The problem of learning functions from a set X into a set Y is considered, assuming only that the examples are generated by independent draws according to an unknown probability measure on X × Y. The learner's goal is to find a function in a given hypothesis space of functions from X into Y that on average gives Y values close to those observed in random examples. The discrepancy is measured by a bounded real-valued loss function, and the average loss is called the error of the hypothesis. A theorem on the uniform convergence of empirical error estimates to true error rates is given for certain hypothesis spaces, and it is shown how this implies learnability. A generalized notion of VC dimension that applies to classes of real-valued functions, and a notion of capacity for classes of functions that map into a bounded metric space, are given. These measures are used to bound the rate of convergence of empirical error estimates to true error rates, giving bounds on the sample size needed for learning with hypotheses from these classes. As an application, a distribution-independent uniform convergence result for certain classes of functions computed by feedforward neural nets is obtained. Distribution-specific uniform convergence results for classes of functions that are uniformly continuous on average are also obtained.
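
To make the quantities in the abstract concrete, the following is a sketch in standard notation; the symbols P, \ell, H, h, m, \varepsilon, and \delta are assumptions introduced here for illustration, not taken from the paper. It writes out the true error, the empirical error, and the uniform convergence condition that the stated sample size bounds address.

% Sketch only; requires amsmath. Examples (x_i, y_i) are drawn i.i.d. from an
% unknown measure P on X \times Y, and \ell is a bounded real-valued loss function.
\[
  \mathrm{er}_P(h) \;=\; \mathbf{E}_{(x,y)\sim P}\bigl[\ell(h(x),y)\bigr]
  \qquad \text{(true error of a hypothesis } h \in H\text{)}
\]
\[
  \widehat{\mathrm{er}}_S(h) \;=\; \frac{1}{m}\sum_{i=1}^{m} \ell(h(x_i),y_i)
  \qquad \text{(empirical error on a sample } S=((x_1,y_1),\dots,(x_m,y_m))\text{)}
\]
\[
  \Pr\Bigl[\,\sup_{h\in H}\bigl|\widehat{\mathrm{er}}_S(h)-\mathrm{er}_P(h)\bigr| > \varepsilon\,\Bigr] \;\le\; \delta
  \qquad \text{(uniform convergence over } H\text{)}
\]

Under this reading, a sample size bound gives an m, in terms of \varepsilon, \delta, and a dimension or capacity measure of H, that suffices for the last inequality to hold, so that minimizing empirical error approximately minimizes true error.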