Dimensionality Reduction in Statistical Learning

Many statistical learning tasks involve data presented in high-dimensional spaces, where the 'curse of dimensionality' often obstructs the methods used to solve them. To avoid this phenomenon, dimensionality reduction algorithms are applied as a key first step: they transform the original high-dimensional data into lower-dimensional representations so that the initial task reduces to a lower-dimensional one. The precise formulation of the dimensionality reduction problem varies with the underlying statistical learning task. A new geometrically motivated algorithm that solves various dimensionality reduction problems is presented.
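Independent of the specific algorithm, the generic dimensionality reduction step is a map taking points in R^p to representations in R^q with q < p. As a minimal sketch of that step, the snippet below uses classical PCA via the SVD; it is a standard baseline, not the geometrically motivated algorithm the abstract refers to (which is not specified here), and the function and variable names are illustrative.

```python
# Minimal sketch of the generic dimensionality reduction step, using
# classical PCA as a stand-in. Not the paper's algorithm; names are
# illustrative only.
import numpy as np

def pca_reduce(X, q):
    """Map n points in R^p (the rows of X) to R^q, q < p, via PCA."""
    Xc = X - X.mean(axis=0)                  # center the data
    # SVD of the centered data: rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:q].T                     # q-dimensional representation

# Example: 1000 noisy samples near a 2-D linear subspace embedded in R^10
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 2))
A = rng.normal(size=(2, 10))
X = latent @ A + 0.01 * rng.normal(size=(1000, 10))

Y = pca_reduce(X, 2)                         # lower-dimensional representation
print(Y.shape)                               # (1000, 2)
```

In this linear setting PCA recovers the low-dimensional structure almost exactly; manifold learning methods address the harder case where the data lie near a nonlinear low-dimensional manifold.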
