Distributed estimation of principal eigenspaces.

Principal component analysis (PCA) is fundamental to statistical machine learning. It extracts latent principal factors that account for the most variation in the data. When data are stored across multiple machines, however, the communication cost can prohibit computing PCA at a central location, so distributed PCA algorithms are needed. This paper proposes and studies a distributed PCA algorithm: each node machine computes the top K eigenvectors of its local sample covariance matrix and transmits them to the central server; the central server then aggregates the information from all node machines and conducts a PCA based on the aggregated information. We investigate the bias and variance of the resulting distributed estimator of the top K eigenvectors. In particular, we show that for distributions with symmetric innovations, the empirical top eigenspaces are unbiased and hence the distributed PCA is "unbiased". We derive the rate of convergence for distributed PCA estimators, which depends explicitly on the effective rank of the covariance matrix, the eigen-gap, and the number of machines. We show that when the number of machines is not unreasonably large, distributed PCA performs as well as PCA on the whole sample, even without full access to the whole data. The theoretical results are verified by an extensive simulation study. We also extend the analysis to the heterogeneous case, where the population covariance matrices differ across local machines but share similar top eigen-structures.
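The two-round scheme described above admits a compact sketch. The Python snippet below is a minimal illustration (not the authors' implementation), assuming each machine's data matrix is already centered and that the server aggregates the local eigenspaces by averaging the projection matrices V_l V_l^T before extracting its own top K eigenvectors; the only communication is the d x K matrix V_l sent by each machine.

```python
import numpy as np

def local_top_eigenvectors(X, K):
    """Top-K eigenvectors of the local sample covariance (X assumed centered)."""
    n = X.shape[0]
    sigma_hat = X.T @ X / n
    # eigh returns eigenvalues in ascending order; keep the last K columns
    _, vecs = np.linalg.eigh(sigma_hat)
    return vecs[:, -K:]                       # d x K orthonormal matrix

def distributed_pca(machines, K):
    """Aggregate local eigenspaces and run a second PCA on the average projector."""
    d = machines[0].shape[1]
    p_bar = np.zeros((d, d))
    for X in machines:
        V = local_top_eigenvectors(X, K)      # transmitted to the central server
        p_bar += V @ V.T                      # projection onto the local eigenspace
    p_bar /= len(machines)
    _, vecs = np.linalg.eigh(p_bar)
    return vecs[:, -K:]                       # estimated top-K eigenspace
```

Averaging projection matrices rather than raw eigenvectors avoids sign and rotation ambiguities in the local solutions, which is one natural reading of "aggregates the information" in the abstract.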
