Nonlinear Dimensionality Reduction in Texture Classification: Is Manifold Learning Better Than PCA?
This paper presents a comparative analysis of manifold learning and linear dimensionality reduction algorithms. First, classical texture image descriptors, namely Gray-Level Co-occurrence Matrix features, Haralick features, Histogram of Oriented Gradients features and Local Binary Patterns, are combined to characterize and discriminate textures: for patches extracted from several texture images, the image descriptors are concatenated. Dimensionality reduction is then performed with four algorithms, namely Principal Component Analysis (PCA), Locally Linear Embedding (LLE), Isometric Feature Mapping (ISOMAP) and Laplacian Eigenmaps (Lap. Eig.). The resulting learned features are used to train four different classifiers: k-nearest neighbors, naive Bayes, decision tree and multilayer perceptron. Finally, a non-parametric statistical hypothesis test, the Wilcoxon signed-rank test, is applied to determine whether the manifold learning algorithms perform better than PCA. Computational experiments were conducted on the Outex and Salzburg datasets. Of the twelve comparisons carried out, PCA presented better results than ISOMAP, LLE and Lap. Eig. in three; the remaining nine showed no significant differences, indicating that, given large collections of texture images (bigger databases), combining image feature descriptors, or patches extracted directly from raw image data, with manifold learning techniques can potentially improve texture classification.
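The comparison pipeline described above can be sketched with off-the-shelf scikit-learn components. This is a minimal illustration, not the authors' implementation: synthetic features generated with `make_classification` stand in for the concatenated texture descriptors, and the embedding dimension, neighborhood size and cross-validation setup are placeholder choices.

```python
# Sketch of the pipeline: reduce features with PCA and three manifold
# learners, score four classifiers on each embedding, then compare PCA
# against each manifold method with the Wilcoxon signed-rank test.
# Synthetic data replaces the GLCM/Haralick/HOG/LBP descriptors from
# the Outex and Salzburg datasets used in the paper.
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for concatenated texture descriptors of image patches.
X, y = make_classification(n_samples=300, n_features=60, n_informative=20,
                           n_classes=3, random_state=0)

reducers = {
    "PCA": PCA(n_components=10),
    "ISOMAP": Isomap(n_neighbors=12, n_components=10),
    "LLE": LocallyLinearEmbedding(n_neighbors=12, n_components=10),
    "Lap. Eig.": SpectralEmbedding(n_components=10),  # Laplacian Eigenmaps
}
classifiers = {
    "kNN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}

# Mean cross-validated accuracy of each classifier on each embedding.
scores = {}
for rname, reducer in reducers.items():
    Z = reducer.fit_transform(X)  # unsupervised embedding of the features
    scores[rname] = [cross_val_score(clf, Z, y, cv=5).mean()
                     for clf in classifiers.values()]

# Paired Wilcoxon signed-rank test: PCA vs. each manifold learner,
# paired over the per-classifier accuracies.
for rname in ("ISOMAP", "LLE", "Lap. Eig."):
    stat, p = wilcoxon(scores["PCA"], scores[rname])
    print(f"PCA vs {rname}: W = {stat:.1f}, p = {p:.3f}")
```

In a faithful replication the pairing for the signed-rank test would run over many dataset/classifier combinations rather than the four classifier scores shown here, which are too few for a meaningful test and serve only to demonstrate the mechanics.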