A Statistical Perspective on Randomized Sketching for Ordinary Least-Squares