Self-representation dimensionality reduction for multi-model classification

Feature selection, which removes noisy or irrelevant samples and selects a subset of representative features from the high-dimensional data space, has long been a significant technique in computer vision and machine learning. Motivated by the interpretability of feature selection methods and by the successful use of low-rank constraints in sparse learning, this article presents a novel unsupervised feature selection model that applies low-rank regression in the loss function and combines a sparsity term with K-means clustering in the regularization term. The proposed method differs from existing state-of-the-art feature selection methods in three ways: (1) it represents every feature by the other features (including itself) through a feature-level self-expression in the loss function; (2) it embeds K-means clustering to generate pseudo class labels for feature selection, yielding a pseudo-supervised method, since supervised learning usually achieves better recognition results than unsupervised learning; and (3) it imposes a low-rank constraint on feature selection, thereby exploiting two aspects of the information inherent in the data: the low-rank constraint accounts for the correlations among the response variables, while an ℓ2,p-norm regularizer accounts for the correlation between the feature vectors and their corresponding response variables. Extensive experimental results on three multi-model datasets demonstrate that the proposed unsupervised feature selection method outperforms related approaches.
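
To make the feature-level self-representation idea concrete, the sketch below is a minimal illustration, not the authors' full model: it solves the plain objective min_W ||X - XW||_F^2 + lam * ||W||_{2,1} by iteratively reweighted least squares and ranks features by the row norms of W. The low-rank structure on W and the K-means pseudo-label term described above are omitted, and the function name, the value of lam, and the solver choice are illustrative assumptions.

import numpy as np

def self_representation_feature_selection(X, lam=1.0, n_iter=30, eps=1e-8):
    """Rank features with a feature-level self-representation model.

    Solves  min_W ||X - X W||_F^2 + lam * ||W||_{2,1}
    by iteratively reweighted least squares (IRLS); each feature is then
    scored by the l2-norm of its row in W (how much it helps reconstruct
    all features, including itself).

    X: (n_samples, n_features) data matrix.
    Returns feature indices sorted by decreasing importance, and W.
    """
    n, d = X.shape
    XtX = X.T @ X                       # (d, d) Gram matrix of the features
    W = np.eye(d)                       # start from "each feature represents itself"
    for _ in range(n_iter):
        # diagonal reweighting matrix built from the current row norms of W
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
        # closed-form IRLS update: W = (X^T X + lam * D)^{-1} X^T X
        W = np.linalg.solve(XtX + lam * D, XtX)
    scores = np.sqrt((W ** 2).sum(axis=1))      # row norms serve as importance scores
    return np.argsort(-scores), W

# Toy usage (synthetic data): keep the 5 highest-ranked of 20 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
ranking, W = self_representation_feature_selection(X, lam=0.5)
selected = ranking[:5]

In the full model described above, W would additionally be constrained to be low rank so that correlations among the response variables are exploited, and K-means pseudo labels would enter the regression, which is where the pseudo-supervised gains are expected to come from.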
