Class-Probability Based Semi-Supervised Dimensionality Reduction for Hyperspectral Images

Hyperspectral images (HSIs) are highly valuable because of the rich information they contain. For the same reason, however, they are inconvenient to analyze: their dimensionality is high and they carry a large amount of redundant information. Dimensionality reduction (DR) is therefore often an indispensable step in HSI analysis. Because labeling samples is expensive, semi-supervised learning techniques that perform DR with only a small number of labeled samples have attracted increasing attention in recent years. In this paper, we propose a novel method called class-probability semi-supervised DR (CPSDR). Unlike previous semi-supervised DR methods, which focus only on the small set of labeled samples and rely on their local geometry, our approach also pays close attention to the unlabeled samples. Moreover, it exploits not only local geometry information but also class structure information, and combines the two to yield a more discriminative scatter matrix. We formulate the task as an optimization problem and solve it by eigenvalue decomposition. Experimental results on the Salinas and Pavia University hyperspectral datasets suggest that our algorithm achieves state-of-the-art performance.
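
The abstract does not give implementation details, but the general recipe it describes (combine a labeled discriminative scatter term with an unlabeled locality term, then solve an eigenvalue problem for the projection) can be sketched as follows. This is a minimal illustrative sketch, not the authors' CPSDR formulation: the LDA-style scatter matrices, the graph-Laplacian locality term, the trade-off parameter `alpha`, and the function name `semi_supervised_dr` are all assumptions, and the sketch omits the class-probability weighting of unlabeled samples that the proposed method additionally exploits.

```python
# Illustrative sketch of semi-supervised DR via a generalized eigenvalue problem.
# NOT the authors' CPSDR implementation; all names and parameters are assumptions.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def semi_supervised_dr(X_l, y_l, X_u, n_components=10, alpha=0.5, k=10):
    """Project samples to n_components dimensions using labeled scatter
    matrices plus an unlabeled locality-preserving regularizer."""
    X = np.vstack([X_l, X_u])
    d = X_l.shape[1]
    mean_all = X_l.mean(axis=0)

    # Between- and within-class scatter from the labeled samples (standard LDA).
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y_l):
        Xc = X_l[y_l == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)

    # Locality-preserving term from all samples: graph Laplacian L = D - W
    # built on a symmetrized k-nearest-neighbor graph.
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T).toarray()
    L = np.diag(W.sum(axis=1)) - W
    S_local = X.T @ L @ X

    # Combine the discriminative and locality terms, then solve the
    # generalized eigenvalue problem  Sb v = lambda (Sw + alpha * S_local) v.
    reg = Sw + alpha * S_local + 1e-6 * np.eye(d)   # small ridge keeps reg positive definite
    eigvals, eigvecs = eigh(Sb, reg)
    order = np.argsort(eigvals)[::-1]               # largest generalized eigenvalues first
    P = eigvecs[:, order[:n_components]]
    return X @ P, P
```

Under these assumptions, the projection is obtained from the leading generalized eigenvectors, mirroring the eigenvalue-decomposition solution mentioned in the abstract; the actual CPSDR method would replace the hard labeled/unlabeled split with class-probability estimates when building the scatter matrix.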
