A Swiss Army Infinitesimal Jackknife

The error or variability of machine learning algorithms is often assessed by repeatedly refitting a model with different weighted versions of the observed data. The ubiquitous tools of cross-validation (CV) and the bootstrap are examples of this technique. These methods are powerful in large part due to their model agnosticism but can be slow to run on modern, large data sets due to the need to repeatedly refit the model. In this work, we use a linear approximation to the dependence of the fitting procedure on the weights, producing results that can be faster than repeated refitting by an order of magnitude. This linear approximation is sometimes known as the “infinitesimal jackknife” in the statistics literature, where it is mostly used as a theoretical tool to prove asymptotic results. We provide explicit finite-sample error bounds for the infinitesimal jackknife in terms of a small number of simple, verifiable assumptions. Our results apply whether the weights and data are stochastic or deterministic, and so can be used as a tool for proving the accuracy of the infinitesimal jackknife on a wide variety of problems. As a corollary, we state mild regularity conditions under which our approximation consistently estimates true leave-k-out cross-validation for any fixed k. These theoretical results, together with modern automatic differentiation software, support the application of the infinitesimal jackknife to a wide variety of practical problems in machine learning, providing a “Swiss Army infinitesimal jackknife.” We demonstrate the accuracy of our methods on a range of simulated and real datasets.

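To make the linear approximation concrete: for a weighted M-estimator, a first-order Taylor expansion of the fitted parameters in the data weights around the all-ones weight vector gives theta(w) ≈ theta_hat - H^{-1} * sum_n (w_n - 1) * g_n, where g_n are the per-observation gradients and H is the Hessian of the weighted objective at the original fit. The sketch below illustrates this idea for approximate leave-one-out cross-validation, using JAX for the automatic differentiation the abstract refers to; the least-squares loss and all function names here are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the infinitesimal-jackknife (IJ) approximation to
# leave-one-out refits, assuming a simple least-squares per-point loss.
import jax
import jax.numpy as jnp


def per_point_loss(theta, x, y):
    # Loss contributed by a single observation (illustrative choice).
    return 0.5 * (jnp.dot(x, theta) - y) ** 2


def weighted_objective(theta, w, X, Y):
    # Weighted sum of per-observation losses, as in the weighted refitting setup.
    losses = jax.vmap(per_point_loss, in_axes=(None, 0, 0))(theta, X, Y)
    return jnp.sum(w * losses)


def ij_leave_one_out(theta_hat, X, Y):
    """Linearize theta(w) around the all-ones weight vector and evaluate the
    linearization at the weight vectors that drop each observation in turn."""
    n = X.shape[0]
    ones = jnp.ones(n)
    # Hessian of the full weighted objective at the original fit.
    H = jax.hessian(weighted_objective)(theta_hat, ones, X, Y)
    # Per-observation gradients g_n at the original fit, shape (n, d).
    G = jax.vmap(jax.grad(per_point_loss), in_axes=(None, 0, 0))(theta_hat, X, Y)
    # IJ: theta(w) ≈ theta_hat - H^{-1} sum_n (w_n - 1) g_n.
    # Dropping observation n sets w_n = 0, so theta_{-n} ≈ theta_hat + H^{-1} g_n.
    return theta_hat + jnp.linalg.solve(H, G.T).T


# Usage (synthetic data): fit once, then approximate all n leave-one-out refits.
key_x, key_e = jax.random.split(jax.random.PRNGKey(0))
X = jax.random.normal(key_x, (50, 3))
Y = X @ jnp.array([1.0, -2.0, 0.5]) + 0.1 * jax.random.normal(key_e, (50,))
theta_hat = jnp.linalg.solve(X.T @ X, X.T @ Y)          # closed-form OLS fit
theta_loo = ij_leave_one_out(theta_hat, X, Y)            # (50, 3): one row per left-out point
```

The one linear solve against the per-observation gradients replaces n full refits, which is the source of the order-of-magnitude speedup claimed above.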