Online Learning of Eigenvectors

Computing the leading eigenvector of a symmetric real matrix is a fundamental primitive of numerical linear algebra with numerous applications. We consider a natural online extension of the leading eigenvector problem: a sequence of matrices is presented, and the goal is to predict a unit vector for each matrix before it is revealed, with the overall goal of competing with the leading eigenvector of the cumulative matrix. Existing regret-minimization algorithms for this problem either require computing an eigendecomposition on every iteration or suffer from a regret bound with a large dependence on the dimension; in both cases the algorithms are impractical for large-scale applications. In this paper we present new algorithms that avoid both issues: they require no expensive matrix decompositions, and they guarantee regret rates with at most a mild dependence on the dimension. In contrast to previous algorithms, ours also admit implementations that leverage sparsity in the data to further reduce computation. We extend our results to handle non-symmetric matrices as well.
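To make the online protocol and its regret concrete, the following is a minimal Python sketch of the setting described above. It is illustrative only and is not the paper's algorithm: the predictor shown is a simple follow-the-leader-style baseline that runs power iteration with a random start on the cumulative matrix (avoiding an eigendecomposition per round); the function names, the Frobenius-norm shift used to make power iteration well behaved on indefinite matrices, and all parameter choices are assumptions made for this sketch. The single eigendecomposition at the end is used only to evaluate the regret against the best fixed unit vector in hindsight.

import numpy as np

def power_iteration(A, num_iters=50, rng=None):
    # Approximate the leading eigenvector of a symmetric matrix A via
    # power iteration with a random start (no eigendecomposition needed).
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v

def ftl_predict(C):
    # Illustrative follow-the-leader-style predictor (not the paper's method).
    # Shifting by the Frobenius norm makes C + shift*I positive semidefinite
    # without changing its leading eigenvector, so power iteration converges
    # to the eigenvector of the largest (signed) eigenvalue of C.
    shift = np.linalg.norm(C, "fro") + 1e-6
    return power_iteration(C + shift * np.eye(C.shape[0]))

def online_eigenvector_regret(matrices, predict):
    # Online protocol: at round t the learner commits to a unit vector v_t
    # using only A_1, ..., A_{t-1}; the reward is v_t^T A_t v_t. Regret is
    # measured against lambda_max of the cumulative matrix, i.e. the reward
    # of the best fixed unit vector in hindsight.
    d = matrices[0].shape[0]
    cumulative = np.zeros((d, d))
    reward = 0.0
    for A in matrices:
        v = predict(cumulative)
        reward += v @ A @ v
        cumulative += A
    best = np.linalg.eigvalsh(cumulative)[-1]  # lambda_max of sum_t A_t
    return best - reward

Example usage on random symmetric matrices:

rng = np.random.default_rng(0)
def random_symmetric(d):
    B = rng.standard_normal((d, d))
    return (B + B.T) / 2

print(online_eigenvector_regret([random_symmetric(20) for _ in range(100)], ftl_predict))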
