Maximal Discrepancy vs. Rademacher Complexity for error estimation
Davide Anguita | Luca Oneto | Sandro Ridella | Alessandro Ghio
[1] Davide Anguita, et al. Maximal Discrepancy for Support Vector Machines, 2011, ESANN.
[2] John Langford, et al. Beating the hold-out: bounds for K-fold and progressive cross-validation, 1999, COLT '99.
[3] Davide Anguita, et al. Model selection for support vector machines: Advantages and disadvantages of the Machine Learning Theory, 2010, The 2010 International Joint Conference on Neural Networks (IJCNN).
[4] A. Isaksson, et al. Cross-validation and bootstrapping are unreliable in small sample classification, 2008, Pattern Recognition Letters.
[5] Peter L. Bartlett, et al. Rademacher and Gaussian Complexities: Risk Bounds and Structural Results, 2003, Journal of Machine Learning Research.
[6] Dariu Gavrila, et al. An Experimental Study on Pedestrian Classification, 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[7] T. Poggio, et al. General conditions for predictivity in learning theory, 2004, Nature.
[8] Michaël Aupetit. Nearly homogeneous multi-partitioning with a deterministic generator, 2009, Neurocomputing.
[9] Peter L. Bartlett, et al. Model Selection and Error Estimation, 2000, Machine Learning.
[10] Yoshua Bengio, et al. An empirical evaluation of deep architectures on problems with many factors of variation, 2007, ICML '07.
[11] Vladimir N. Vapnik, et al. The Nature of Statistical Learning Theory, 2000, Statistics for Engineering and Information Science.