Low Rank Approximation Directed by Leverage Scores and Computed at Sub-linear Cost

Low rank approximation (LRA) of a matrix is a major subject of matrix and tensor computations and of data mining and analysis. It is desirable (and even imperative in applications to Big Data) to solve the problem at sub-linear cost, that is, by using far fewer memory cells and arithmetic operations than the input matrix has entries, but this is impossible even for the small family of matrices in our Appendix. Nevertheless, we prove that it is possible with high probability (whp) for random matrices admitting LRA. Namely, we recall the known randomized algorithms that whp solve the LRA problem for any matrix admitting LRA by relying on the computation of the so-called leverage scores. That computation has super-linear cost, but we simplify the solution algorithm and run it at sub-linear cost by trivializing the computation of the leverage scores. Then we prove that whp the resulting algorithms output accurate LRA of a random input matrix admitting LRA.
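The trivialization the abstract describes amounts to replacing leverage-score sampling with uniform sampling of rows and columns in a CUR-type scheme: only O((m+n)s) entries of an m-by-n input are read, which is sub-linear in mn. The following sketch illustrates this idea under our own assumptions; the function name, the choice of an s-by-s intersection block, and the rank-k truncated pseudoinverse nucleus are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def uniform_cur(A, k, s, seed=None):
    """Rank-k CUR approximation with uniform (trivialized) sampling.

    Instead of sampling by leverage scores, whose computation has
    super-linear cost, sample s rows and s columns uniformly at random.
    For a random input admitting LRA this is accurate whp.
    Only the sampled rows and columns of A are ever read.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    J = rng.choice(n, size=s, replace=False)  # column indices
    I = rng.choice(m, size=s, replace=False)  # row indices
    C = A[:, J]           # m x s sampled columns
    R = A[I, :]           # s x n sampled rows
    W = A[np.ix_(I, J)]   # s x s intersection block
    # Nucleus: rank-k truncated Moore-Penrose pseudoinverse of W,
    # so that C @ U @ R approximates A.
    U_, S_, Vt = np.linalg.svd(W)
    S_inv = np.where(S_[:k] > 1e-12, 1.0 / S_[:k], 0.0)
    U = Vt[:k].T @ np.diag(S_inv) @ U_[:, :k].T
    return C, U, R
```

For an input of exact rank k, the classical skeleton identity A = C W^+ R holds whenever the intersection block W also has rank k, which occurs almost surely for Gaussian test matrices; for inputs that merely admit LRA, the product C @ U @ R serves as the approximation.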
