Maximal-discrepancy bounds for regularized classifiers

Regularized classifiers such as Support Vector Machines (SVM) and Regularized Least Squares (RLS) are among the most widely used and successful classifiers in machine learning. The theory and the empirical evaluation of the associated generalization bounds are therefore of paramount importance; in particular, bounds based on the maximal-discrepancy approach have proved quite effective. This paper presents an efficient, iterative procedure for evaluating maximal-discrepancy bounds for this class of classifiers. Empirical results on UCI datasets show that the approach can attain tighter bounds on the run-time classification error.
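
For readers unfamiliar with the technique, the following is a minimal sketch of the standard maximal-discrepancy quantity in the spirit of Bartlett, Boucheron, and Lugosi; the notation (hypothesis class $\mathcal{F}$, loss $\ell$, sample size $2n$) is illustrative and not taken from the paper. Given a sample of size $2n$ split into two halves, the empirical maximal discrepancy is

\[
\hat{D}_n(\mathcal{F}) \;=\; \max_{f \in \mathcal{F}} \left( \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr) \;-\; \frac{1}{n}\sum_{i=n+1}^{2n} \ell\bigl(f(x_i), y_i\bigr) \right),
\]

which yields high-probability generalization bounds of the generic form

\[
L(f) \;\le\; \hat{L}_{2n}(f) \;+\; \hat{D}_n(\mathcal{F}) \;+\; c\,\sqrt{\frac{\ln(1/\delta)}{n}},
\]

holding with probability at least $1-\delta$, where the constant $c$ and the exact form of the deviation term vary across statements. Roughly speaking, computing $\hat{D}_n(\mathcal{F})$ amounts to an empirical risk minimization on the sample with the labels of one half flipped, i.e., one extra training run, which is why an efficient evaluation procedure for regularized classifiers is the relevant computational question.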