Online Row Sampling

Finding a small spectral approximation for a tall $n \times d$ matrix $A$ is a fundamental numerical primitive. For a number of reasons, one often seeks an approximation whose rows are sampled from those of $A$. Row sampling improves interpretability, saves space when $A$ is sparse, and preserves row structure, which is especially important, for example, when $A$ represents a graph. However, correctly sampling rows from $A$ can be costly when the matrix is large and cannot be stored and processed in memory. Hence, a number of recent publications focus on row sampling in the streaming setting, using little more space than what is required to store the output approximation [KL13, KLM+14]. Inspired by a growing body of work on online algorithms for machine learning and data analysis, we extend this work to a more restrictive online setting: we read rows of $A$ one by one and immediately decide whether each row should be kept in the spectral approximation or discarded, without ever retracting these decisions. We present an extremely simple algorithm that approximates $A$ up to multiplicative error $\epsilon$ and additive error $\delta$ using $O(d \log d \log(\epsilon\|A\|_2^2/\delta)/\epsilon^2)$ online samples, with memory overhead proportional to the cost of storing the spectral approximation. We also present an algorithm that uses $O(d^2)$ memory but only requires $O(d \log(\epsilon\|A\|_2^2/\delta)/\epsilon^2)$ samples, which we show is optimal. Our methods are clean and intuitive, allow for lower memory usage than prior work, and expose new theoretical properties of leverage-score-based matrix approximation.
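
The recipe the abstract alludes to is easy to state: maintain the (regularized) Gram matrix of the rows seen so far, score each arriving row by its online ridge leverage score against it, keep the row with probability proportional to that score, and rescale kept rows so the sample stays unbiased. The Python sketch below illustrates this, maintaining the exact Gram matrix as in the $O(d^2)$-memory variant; the function name `online_row_sample`, the oversampling constant `c`, and the $\delta/\epsilon$ regularization are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def online_row_sample(A, eps, delta, c=8.0, seed=0):
    """Decide, for each row of A in turn, whether to keep it, irrevocably.

    Each arriving row a is scored by its online ridge leverage score
        tau = a^T M_prev^{-1} a,   M_prev = A_prev^T A_prev + (delta/eps) I,
    kept with probability p = min(1, c * tau * log(d) / eps^2), and rescaled
    by 1/sqrt(p) so the kept rows B satisfy E[B^T B] = A^T A.
    """
    rng = np.random.default_rng(seed)
    _, d = A.shape
    # Regularized Gram matrix of all rows seen so far: O(d^2) memory.
    M = (delta / eps) * np.eye(d)
    kept = []
    for a in A:
        tau = float(a @ np.linalg.solve(M, a))         # online leverage score
        p = min(1.0, c * tau * np.log(max(d, 2)) / eps ** 2)
        if rng.random() < p:
            kept.append(a / np.sqrt(p))                # rescale: unbiased sample
        M += np.outer(a, a)                            # update with every row seen
    return np.array(kept)

# B^T B approximates A^T A up to eps-multiplicative, delta-additive error w.h.p.
A = np.random.default_rng(1).normal(size=(5000, 20))
B = online_row_sample(A, eps=0.5, delta=1e-3)
print(f"kept {B.shape[0]} of {A.shape[0]} rows")
```

In a tuned implementation one would maintain $M^{-1}$ incrementally via the Sherman-Morrison formula [14] rather than solving a fresh linear system per row, and the low-memory variant avoids the $O(d^2)$ Gram matrix entirely by estimating scores from the kept, rescaled rows alone.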

References

[1] Christos Boutsidis et al. Online Principal Components Analysis. SODA, 2015.
[2] He Sun et al. Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time. 2017.
[3] Antoine Bordes et al. The Huller: A Simple and Efficient Online SVM. ECML, 2005.
[4] Yin Tat Lee et al. Single Pass Spectral Sparsification in Dynamic Streams. FOCS, 2014.
[5] Jakub W. Pachocki et al. A Framework for Analyzing Resparsification Algorithms. SODA, 2016.
[6] David P. Woodruff et al. Low Rank Approximation and Regression in Input Sparsity Time. STOC, 2013.
[7] Christos Boutsidis et al. Optimal CUR Matrix Decompositions. STOC, 2014.
[8] Daniele Calandriello et al. Second-Order Kernel Online Convex Optimization with Adaptive Sketching. ICML, 2017.
[9] Maxim Sviridenko et al. An Algorithm for Online K-Means Clustering. ALENEX, 2014.
[10] Daniele Calandriello et al. Efficient Second-Order Online Kernel Learning with Adaptive Embedding. NIPS, 2017.
[11] Michael W. Mahoney et al. Fast Randomized Kernel Ridge Regression with Statistical Guarantees. NIPS, 2015.
[12] Shang-Hua Teng et al. Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems. SIAM J. Matrix Anal. Appl., 2006.
[13] Nikhil Srivastava et al. Twice-Ramanujan Sparsifiers. STOC, 2009.
[14] J. Sherman et al. Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix. 1950.
[15] Michael W. Mahoney et al. Low-Distortion Subspace Embeddings in Input-Sparsity Time and Applications to Robust Linear Regression. STOC, 2013.
[16] Richard Peng et al. Uniform Sampling for Matrix Approximation. ITCS, 2014.
[17] Huy L. Nguyen et al. OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings. FOCS, 2013.
[18] Michael B. Cohen et al. Dimensionality Reduction for k-Means Clustering and Low Rank Approximation. STOC, 2014.
[19] Michael B. Cohen et al. Input Sparsity Time Low-Rank Approximation via Ridge Leverage Score Sampling. SODA, 2015.
[20] Nikhil Srivastava et al. Graph Sparsification by Effective Resistances. SIAM J. Comput., 2008.
[21] Koby Crammer et al. Online Passive-Aggressive Algorithms. J. Mach. Learn. Res., 2003.
[22] Gary L. Miller et al. Iterative Row Sampling. FOCS, 2013.
[23] Yin Tat Lee et al. Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time. FOCS, 2015.
[24] Michael B. Cohen et al. Ridge Leverage Scores for Low-Rank Approximation. arXiv, 2015.
[25] Gary L. Miller et al. Approaching Optimality for Solving SDD Linear Systems. FOCS, 2010.
[26] J. Tropp. Freedman's Inequality for Matrix Martingales. arXiv:1101.3039, 2011.
[27] Shang-Hua Teng et al. Nearly-Linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems. STOC, 2004.
[28] Jonathan A. Kelner et al. Spectral Sparsification in the Semi-Streaming Setting. Theory of Computing Systems, 2012.