Manifold learning, a promised land or work in progress?

Image clustering and classification tasks often deal with data of very high dimensionality. To alleviate the curse of dimensionality, several methods, such as Isomap, locally linear embedding (LLE), and kernel PCA (KPCA), have recently been proposed and applied to learn low-dimensional, non-linear manifolds embedded in high-dimensional spaces. Unfortunately, the scenarios in which these methods appear to be effective are very contrived. In this work, we empirically examine these methods on a realistic but not especially difficult dataset, and discuss the promises and limitations of these dimensionality-reduction schemes.
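
To make the three embeddings under discussion concrete, the sketch below shows how they can be computed with scikit-learn; this is purely illustrative and is not the experimental setup of this work. The digits dataset, the neighbourhood sizes, and the RBF kernel width are assumptions chosen only for the example.

    # Illustrative sketch only: dataset and hyperparameters are assumptions,
    # not the configuration used in this paper's experiments.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import KernelPCA
    from sklearn.manifold import Isomap, LocallyLinearEmbedding

    X, y = load_digits(return_X_y=True)  # 64-dimensional image vectors

    # Isomap: geodesic distances on a k-nearest-neighbour graph, then classical MDS
    X_isomap = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

    # LLE: reconstruct each point from its neighbours, embed preserving the weights
    X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)

    # Kernel PCA: PCA in the implicit feature space induced by an RBF kernel
    X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1e-3).fit_transform(X)

    print(X_isomap.shape, X_lle.shape, X_kpca.shape)  # (n_samples, 2) each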