Rademacher Processes and Bounding the Risk of Function Learning

We construct data-dependent upper bounds on the risk in function learning problems. The bounds are based on local norms of the Rademacher process indexed by the underlying function class, and they require neither prior knowledge of the distribution of the training examples nor any specific properties of the function class. Using concentration inequalities of Talagrand's type for empirical and Rademacher processes, we show that the bounds hold with high probability, the probability of failure decreasing exponentially fast as the sample size grows. In typical situations frequently encountered in the theory of function learning, the bounds yield a nearly optimal rate of convergence of the risk to zero.
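As a point of reference, the following is a standard (non-local) instance of the kind of bound in question; the notation ($\mathcal{F}$, $P$, $P_n$, $R_n$, the constant $c$) is supplied here for illustration and is not fixed by the abstract itself, and the exact constants vary across statements in the literature. Given i.i.d. training examples $X_1,\dots,X_n$ with common distribution $P$ and i.i.d. Rademacher signs $\varepsilon_1,\dots,\varepsilon_n$ (with $\mathbb{P}\{\varepsilon_i=\pm 1\}=1/2$), independent of the sample, the Rademacher process indexed by a class $\mathcal{F}$ of functions with values in $[0,1]$ is
\[
R_n(f) := \frac{1}{n}\sum_{i=1}^{n} \varepsilon_i f(X_i), \qquad f \in \mathcal{F}.
\]
Symmetrization combined with a concentration inequality then gives a data-dependent bound of the form: for every $t > 0$, with probability at least $1 - e^{-t}$,
\[
\sup_{f \in \mathcal{F}} \bigl( Pf - P_n f \bigr) \;\le\; 2\, \mathbb{E}_{\varepsilon} \sup_{f \in \mathcal{F}} R_n(f) \;+\; c\,\sqrt{\frac{t}{n}},
\]
where $P_n f = n^{-1}\sum_{i=1}^{n} f(X_i)$ is the empirical mean, $\mathbb{E}_{\varepsilon}$ denotes expectation over the signs only (so the first term is computable from the data), and $c$ is a numerical constant. Local bounds of the type described in the abstract sharpen this by replacing the global supremum with norms of $R_n$ over small subsets of $\mathcal{F}$, which is what produces the nearly optimal rates.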