Optimal Subsampling Algorithms for Big Data Generalized Linear Models

To rapidly approximate the maximum likelihood estimator with massive data, Wang et al. (JASA, 2018) proposed an Optimal Subsampling Method under the A-optimality Criterion (OSMAC) for logistic regression. This paper extends the scope of the OSMAC framework to generalized linear models with canonical link functions. The consistency and asymptotic normality of the estimator from a general subsampling algorithm are established, and optimal subsampling probabilities under the A- and L-optimality criteria are derived. Furthermore, using a matrix concentration inequality in the Frobenius norm, finite-sample properties of the subsample estimator based on the optimal subsampling probabilities are derived. Since the optimal subsampling probabilities depend on the full-data estimate, an adaptive two-step algorithm is developed. Asymptotic normality and optimality of the estimator from this adaptive algorithm are established. The proposed methods are illustrated and evaluated through numerical experiments on simulated and real datasets.
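The adaptive two-step algorithm described above follows a pilot-then-resample pattern: a small uniform subsample yields a pilot estimate, which is then used to form data-dependent subsampling probabilities for a second, larger subsample fitted with inverse-probability weights. The following is a minimal sketch of that pattern for the logistic-regression special case (a canonical-link GLM), using L-optimality-style probabilities proportional to |y_i - p_i| * ||x_i||. The function names, the pilot size r0, and the subsample size r are illustrative assumptions, not quantities specified in the paper.

import numpy as np

def fit_weighted_logistic(X, y, w, beta0=None, n_iter=50, tol=1e-8):
    """Weighted MLE for logistic regression via Newton's method."""
    n, d = X.shape
    beta = np.zeros(d) if beta0 is None else beta0.copy()
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))                      # weighted score
        H = (X * (w * p * (1 - p))[:, None]).T @ X      # weighted information
        step = np.linalg.solve(H, grad)
        beta += step
        if np.linalg.norm(step) < tol:
            break
    return beta

def two_step_subsample_fit(X, y, r0=500, r=2000, rng=None):
    """Sketch of a two-step adaptive subsampling estimator (assumed names)."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]

    # Step 1: uniform pilot subsample and pilot estimate.
    idx0 = rng.choice(n, size=r0, replace=True)
    beta_pilot = fit_weighted_logistic(X[idx0], y[idx0], np.ones(r0))

    # Step 2: L-optimality-style probabilities for the logistic case,
    # proportional to |y_i - p_i(beta_pilot)| * ||x_i||.
    p_full = 1.0 / (1.0 + np.exp(-X @ beta_pilot))
    scores = np.abs(y - p_full) * np.linalg.norm(X, axis=1)
    probs = scores / scores.sum()

    idx = rng.choice(n, size=r, replace=True, p=probs)
    # Inverse-probability weights keep the weighted score unbiased
    # for the full-data score.
    w = 1.0 / (n * probs[idx])
    return fit_weighted_logistic(X[idx], y[idx], w, beta0=beta_pilot)

In this sketch the pilot estimate serves two purposes: it determines the second-stage sampling probabilities and warm-starts the Newton iterations on the second subsample; both uses mirror the adaptive structure described in the abstract, though the precise estimator combination in the paper may differ.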
