暂无分享,去创建一个
[1] F. T. Wright,et al. A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables , 1971 .
[2] W. B. Johnson,et al. Extensions of Lipschitz mappings into Hilbert space , 1984 .
[3] G. Pisier,et al. Non commutative Khintchine and Paley inequalities , 1991 .
[4] Moses Charikar,et al. Finding frequent items in data streams , 2002, Theor. Comput. Sci..
[5] Rudolf Ahlswede,et al. Strong converse for identification via quantum channels , 2000, IEEE Trans. Inf. Theory.
[6] Tamás Sarlós,et al. Improved Approximation Algorithms for Large Matrices via Random Projections , 2006, 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06).
[7] S. Muthukrishnan,et al. Sampling algorithms for l2 regression and applications , 2006, SODA '06.
[8] Petros Drineas,et al. Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication , 2006, SIAM J. Comput..
[9] Per-Gunnar Martinsson,et al. Randomized algorithms for the low-rank approximation of matrices , 2007, Proceedings of the National Academy of Sciences.
[10] Nikhil Srivastava,et al. Graph sparsification by effective resistances , 2008, SIAM J. Comput..
[11] Bernard Chazelle,et al. The Fast Johnson--Lindenstrauss Transform and Approximate Nearest Neighbors , 2009, SIAM J. Comput..
[12] David P. Woodruff,et al. Numerical linear algebra in the streaming model , 2009, STOC '09.
[13] Nikhil Srivastava,et al. Twice-ramanujan sparsifiers , 2008, STOC '09.
[14] Amin Saberi,et al. Subgraph sparsification and nearly optimal ultrasparsifiers , 2009, STOC '10.
[15] Kuang-Yao Lee,et al. Multiclass support vector classification via coding and regression , 2010, Neurocomputing.
[16] Anirban Dasgupta,et al. A sparse Johnson: Lindenstrauss transform , 2010, STOC '10.
[17] Avner Magen,et al. Low rank matrix-valued chernoff bounds and approximate matrix multiplication , 2010, SODA '11.
[18] Rachel Ward,et al. New and Improved Johnson-Lindenstrauss Embeddings via the Restricted Isometry Property , 2010, SIAM J. Math. Anal..
[19] Michael W. Mahoney. Randomized Algorithms for Matrices and Data , 2011, Found. Trends Mach. Learn..
[20] Joel A. Tropp,et al. Improved Analysis of the subsampled Randomized Hadamard Transform , 2010, Adv. Data Sci. Adapt. Anal..
[21] Daniel M. Kane,et al. Almost Optimal Explicit Johnson-Lindenstrauss Families , 2011, APPROX-RANDOM.
[22] Nir Ailon,et al. An almost optimal unrestricted fast Johnson-Lindenstrauss transform , 2010, SODA '11.
[23] Hideitsu Hino,et al. New Probabilistic Bounds on Eigenvalues and Eigenvectors of Random Kernel Matrices , 2011, UAI.
[24] Nathan Halko,et al. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions , 2009, SIAM Rev..
[25] David P. Woodruff,et al. Low rank approximation and regression in input sparsity time , 2012, STOC '13.
[26] David P. Woodruff,et al. Fast approximation of matrix coherence and statistical leverage , 2011, ICML.
[27] Mikkel Thorup,et al. Tabulation-Based 5-Independent Hashing with Applications to Linear Probing and Second Moment Estimation , 2012, SIAM J. Comput..
[28] Holger Rauhut,et al. A Mathematical Introduction to Compressive Sensing , 2013, Applied and Numerical Harmonic Analysis.
[29] Huy L. Nguyen,et al. OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings , 2012, 2013 IEEE 54th Annual Symposium on Foundations of Computer Science.
[30] Gary L. Miller,et al. Iterative Row Sampling , 2012, 2013 IEEE 54th Annual Symposium on Foundations of Computer Science.
[31] Dean P. Foster,et al. Faster Ridge Regression via the Subsampled Randomized Hadamard Transform , 2013, NIPS.
[32] Michael W. Mahoney,et al. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression , 2012, STOC '13.
[33] M. Rudelson,et al. Hanson-Wright inequality and sub-gaussian concentration , 2013 .
[34] Huy L. Nguyen,et al. Lower Bounds for Oblivious Subspace Embeddings , 2013, ICALP.
[35] David P. Woodruff,et al. Improved Distributed Principal Component Analysis , 2014, NIPS.
[36] Anastasios Kyrillidis,et al. Approximate matrix multiplication with application to linear embeddings , 2014, 2014 IEEE International Symposium on Information Theory.
[37] Daniel M. Kane,et al. Sparser Johnson-Lindenstrauss Transforms , 2010, JACM.
[38] Mary Wootters,et al. New constructions of RIP matrices with fast multiplication and fewer rows , 2012, SODA.
[39] J. Bourgain. An Improved Estimate in the Restricted Isometry Problem , 2014 .
[40] David P. Woodruff. Sketching as a Tool for Numerical Linear Algebra , 2014, Found. Trends Theor. Comput. Sci..
[41] Michael B. Cohen,et al. Dimensionality Reduction for k-Means Clustering and Low Rank Approximation , 2014, STOC.
[42] Joel A. Tropp,et al. An Introduction to Matrix Concentration Inequalities , 2015, Found. Trends Mach. Learn..
[43] 慧 廣瀬. A Mathematical Introduction to Compressive Sensing , 2015 .
[44] Richard Peng,et al. Uniform Sampling for Matrix Approximation , 2014, ITCS.
[45] Martin J. Wainwright,et al. Randomized sketches for kernels: Fast and optimal non-parametric regression , 2015, ArXiv.
[46] Yin Tat Lee,et al. Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time , 2015, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science.
[47] Christos Boutsidis,et al. Randomized Dimensionality Reduction for $k$ -Means Clustering , 2011, IEEE Transactions on Information Theory.
[48] Michael B. Cohen,et al. Nearly Tight Oblivious Subspace Embeddings by Trace Inequalities , 2016, SODA.
[49] Michael W. Mahoney,et al. Revisiting the Nystrom Method for Improved Large-scale Machine Learning , 2013, J. Mach. Learn. Res..
[50] Oded Regev,et al. The Restricted Isometry Property of Subsampled Fourier Matrices , 2015, SODA.