Manifold-Based Learning and Synthesis

This paper proposes a new approach to analyzing high-dimensional data sets through a low-dimensional manifold. The manifold-based formulation unifies learning from the input space and synthesis back to it. The manifold learning method aims to resolve two problems found in many existing algorithms. The first is local manifold distortion caused by the averaging of costs in a global cost optimization during manifold learning. The second stems from the unit-variance constraint commonly imposed by spectral embedding methods, under which global metric information is lost. For out-of-sample data points, the proposed approach gives simple solutions for traversing between the input space and the feature space. In addition, the method can be used to estimate the underlying manifold dimension and is robust to the choice of the number of neighbors. Experiments on both low-dimensional data and real image data illustrate the theory.
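As a purely illustrative point of reference for the traversal between input space and feature space mentioned above, the sketch below implements a generic LLE-style spectral embedding together with a barycentric out-of-sample mapping in both directions. It is not the paper's algorithm; all function names and parameters (reconstruction_weights, lle_embed, map_between, n_neighbors) are assumptions made for this example.

```python
# Minimal, generic sketch (NOT the paper's method): an LLE-style spectral
# embedding plus a barycentric out-of-sample map in both directions, to
# illustrate "traversing" between input space and feature space.
import numpy as np

def reconstruction_weights(point, neighbors, reg=1e-3):
    """Weights that best reconstruct `point` from `neighbors`, summing to 1."""
    diffs = neighbors - point                          # (k, d)
    C = diffs @ diffs.T                                # local Gram matrix (k, k)
    C += reg * np.trace(C) * np.eye(len(neighbors))    # regularize for stability
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()

def lle_embed(X, n_neighbors=8, n_components=2):
    """Classic LLE: embed X (n, d) into n_components dimensions."""
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    nbr_idx = np.argsort(dists, axis=1)[:, 1:n_neighbors + 1]  # skip self
    W = np.zeros((n, n))
    for i in range(n):
        W[i, nbr_idx[i]] = reconstruction_weights(X[i], X[nbr_idx[i]])
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    eigvals, eigvecs = np.linalg.eigh(M)
    # Discard the constant eigenvector (eigenvalue ~ 0), keep the next ones.
    return eigvecs[:, 1:n_components + 1], nbr_idx

def map_between(query, source_pts, target_pts, n_neighbors=8):
    """Barycentric out-of-sample map: reconstruction weights fitted in the
    source space are re-applied to the corresponding target-space points."""
    idx = np.argsort(np.linalg.norm(source_pts - query, axis=1))[:n_neighbors]
    w = reconstruction_weights(query, source_pts[idx])
    return w @ target_pts[idx]

# Toy usage: a noisy 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 200))
X = np.c_[np.cos(t), np.sin(t), 0.1 * t] + 0.01 * rng.standard_normal((200, 3))
Y, _ = lle_embed(X, n_neighbors=10, n_components=1)

x_new = np.array([np.cos(1.0), np.sin(1.0), 0.1])
y_new = map_between(x_new, X, Y, n_neighbors=10)    # input space -> feature space
x_back = map_between(y_new, Y, X, n_neighbors=10)   # feature space -> input space (synthesis)
```

The same barycentric-weight idea runs in both directions: weights are fitted among nearest neighbors in one space and applied to the corresponding points in the other, which is one common way to handle out-of-sample points without re-running the embedding.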
