Information-Based Optimal Subdata Selection for Big Data Linear Regression

ABSTRACT Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable to extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling errors, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term that is of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward, and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.
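To make the selection step concrete, the sketch below shows one way an information-based rule of this kind can be implemented. It assumes the D-optimality-motivated variant in which, for each covariate in turn, the r = k/(2p) not-yet-selected observations with the smallest values and the r with the largest values of that covariate are retained, and ordinary least squares is then fit on the retained rows only. This is a minimal Python sketch under that assumption; the function name iboss_select, the use of a full sort (a partition-based selection algorithm would be used instead for speed on truly large n), and the tie handling are illustrative choices, not the authors' exact implementation.

    import numpy as np

    def iboss_select(X, k):
        # For each covariate in turn, keep the r rows with the smallest and the
        # r rows with the largest values among rows not yet selected, r = k/(2p).
        # (Illustrative: a partial sort would replace argsort in practice.)
        n, p = X.shape
        r = k // (2 * p)
        selected = np.zeros(n, dtype=bool)
        for j in range(p):
            available = np.where(~selected)[0]
            order = available[np.argsort(X[available, j])]
            selected[order[:r]] = True    # r smallest values of covariate j
            selected[order[-r:]] = True   # r largest values of covariate j
        return np.where(selected)[0]

    # Toy usage: draw full data, select k rows, fit OLS on the subdata only.
    rng = np.random.default_rng(0)
    n, p, k = 100_000, 5, 1_000
    X = rng.standard_normal((n, p))
    y = 1.0 + X @ np.arange(1.0, p + 1.0) + rng.standard_normal(n)
    idx = iboss_select(X, k)
    Z = np.column_stack([np.ones(idx.size), X[idx]])     # add intercept column
    beta_hat, *_ = np.linalg.lstsq(Z, y[idx], rcond=None)

Because the retained rows come deterministically from the tails of each covariate, the information in the subdata keeps growing as the full data size n grows, which is the intuition behind advantage (iii) above; a random subsample of fixed size k has estimator variances bounded below by a term of order 1/k regardless of n.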
