Targeted Cross-Validation

In many applications, we have access to the complete dataset but are only interested in prediction over a particular region of the predictor variables. A standard approach is to find the globally best modeling method from a set of candidate methods. However, it is rare in practice that one candidate method is uniformly better than the others. A natural approach in this scenario is to apply a weighted L2 loss in performance assessment to reflect the region-specific interest. We propose a targeted cross-validation (TCV) procedure to select models or modeling procedures based on a general weighted L2 loss, and we show that TCV is consistent in selecting the best-performing candidate under that loss. Experimental studies demonstrate the use of TCV and its potential advantage over global CV and over the approach of using only local data to model a local region.

Previous investigations of CV have relied on the condition that, once the sample size is large enough, the ranking of two candidates stays the same. However, in many applications with changing data-generating processes or highly adaptive modeling methods, the relative performance of the methods is not static as the sample size varies. Even with a fixed data-generating process, the ranking of two methods may switch infinitely many times. In this work, we broaden the concept of selection consistency by allowing the best candidate to switch as the sample size varies, and then establish the consistency of TCV. This flexible framework applies to high-dimensional and complex machine learning scenarios where the relative performance of modeling procedures is dynamic.
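The idea of comparing candidates under a weighted L2 loss can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight function `w`, the candidate models, and the simulated data are all illustrative assumptions. Each candidate's cross-validated prediction error is averaged with weights that concentrate on the region of interest, and the candidate with the smallest weighted error is selected.

```python
# Minimal sketch of targeted cross-validation (TCV) under a weighted
# squared-error loss. The weight function w, the candidates, and the
# simulated data are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

def tcv_score(model, X, y, w, n_splits=5, seed=0):
    """Weighted squared prediction error, averaged over CV folds."""
    errs = []
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model.fit(X[train], y[train])
        resid = y[test] - model.predict(X[test])
        weights = w(X[test])  # emphasizes the target region of predictor space
        errs.append(np.sum(weights * resid**2) / np.sum(weights))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(400)

# Weight concentrated near x = 0.5, the hypothetical region of interest.
w = lambda X: np.exp(-((X[:, 0] - 0.5) ** 2) / 0.02)

candidates = {"linear": LinearRegression(),
              "knn": KNeighborsRegressor(n_neighbors=10)}
scores = {name: tcv_score(m, X, y, w) for name, m in candidates.items()}
best = min(scores, key=scores.get)
```

With a globally linear candidate and a locally adaptive one, the weighted loss can rank them differently than an unweighted global CV would, which is the motivation for TCV.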
