Subspace learning using consensus on the Grassmannian manifold

Manifold learning and low-dimensional embedding approaches can expose the high-dimensional structure of data and produce task-specific representations. However, uncertainties in the data and the sensitivity of these algorithms to parameter settings reduce the reliability of such representations and make visualization and interpretation of data challenging. A natural way to address the challenges of data visualization is to use linearized embedding approaches. In this paper, we explore approaches to improve the reliability of linearized subspace embedding frameworks by learning a plurality of subspaces and computing their geometric mean on the Grassmannian manifold. Using the proposed algorithm, we build variants of popular unsupervised and supervised graph embedding algorithms, and show that we can infer high-quality embeddings, thereby significantly improving their usability in visualization and classification.
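The abstract's central step, averaging a collection of learned subspaces on the Grassmannian, can be illustrated with a minimal sketch. The snippet below is not the paper's exact consensus algorithm; it computes the extrinsic (chordal, or "flag") mean, a common closed-form surrogate for the Grassmannian average that stacks the orthonormal bases and takes the leading singular vectors. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def chordal_mean(subspaces, k):
    """Extrinsic (chordal) mean of k-dimensional subspaces of R^d.

    Each element of `subspaces` is a (d, k) matrix with orthonormal
    columns spanning one learned subspace. Returns a (d, k) orthonormal
    basis for the averaged subspace. This is an illustrative stand-in
    for the paper's Grassmannian consensus step, not its exact method.
    """
    # Concatenate all bases side by side; the top-k left singular
    # vectors of this matrix minimize the sum of squared chordal
    # (projection-Frobenius) distances to the input subspaces.
    stacked = np.hstack(subspaces)                      # shape (d, k * n)
    U, _, _ = np.linalg.svd(stacked, full_matrices=False)
    return U[:, :k]

# Example: average five randomly drawn 2-dimensional subspaces of R^10.
rng = np.random.default_rng(0)
d, k = 10, 2
bases = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(5)]
mean_basis = chordal_mean(bases, k)
```

In practice, each input subspace would come from one run of a linearized embedding (e.g., a projection matrix learned under a different parameter setting or data perturbation), and the averaged basis serves as the consensus projection. An intrinsic Karcher mean (iterating exponential/logarithm maps on the manifold) is an alternative when the chordal approximation is too coarse.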
