Discriminative Auto-Encoder With Local and Global Graph Embedding

To exploit the potential intrinsic low-dimensional structure of high-dimensional data from the manifold learning perspective, we propose a global graph embedding with a globality-preserving property, which requires that samples be mapped close to the low-dimensional distribution centers of their classes in the embedding space. We then propose a novel local and global graph embedding auto-encoder (LGAE) to capture the geometric structure of the data. Its cost function has three terms: a reconstruction loss that reproduces the input data from the learned representation, a local graph embedding regularizer that maps neighboring samples close together in the embedding space, and a global embedding regularizer that maps samples close to the low-dimensional distribution centers of their classes. During learning, LGAE therefore maps samples from the same class close together in the embedding space, reducing within-class scatter and increasing the between-class margin; it also detects the local and global intrinsic geometric structure of the data and discovers latent discriminant information in the embedding space. We build a stacked LGAE for classification tasks and conduct comprehensive experiments on several benchmark datasets. The results confirm that the proposed framework learns discriminative representations, speeds up network convergence, and significantly improves classification performance.
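The three-term objective described above can be sketched as follows. This is a minimal illustrative NumPy implementation, not the authors' code: the weighting hyperparameters `lam_local` and `lam_global`, the use of per-class embedding means as the class centers, and a precomputed adjacency matrix `W` (e.g. from a k-NN graph) are all assumptions for the sake of the example.

```python
import numpy as np

def lgae_loss(x, x_hat, h, labels, W, lam_local=0.1, lam_global=0.1):
    """Illustrative three-term LGAE objective (sketch, not the authors' code).

    x       : (n, d) input samples
    x_hat   : (n, d) reconstructions from the decoder
    h       : (n, k) embeddings from the encoder
    labels  : (n,)   class labels
    W       : (n, n) local adjacency weights (hypothetical k-NN graph)
    """
    # 1) reconstruction loss: reproduce the input from the representation
    recon = np.sum((x - x_hat) ** 2)

    # 2) local graph embedding: neighboring samples stay close in h-space
    diff = h[:, None, :] - h[None, :, :]           # all pairwise differences
    local = np.sum(W * np.sum(diff ** 2, axis=2))

    # 3) global embedding: pull samples toward their class centers in h-space
    global_term = 0.0
    for c in np.unique(labels):
        hc = h[labels == c]
        global_term += np.sum((hc - hc.mean(axis=0)) ** 2)

    return recon + lam_local * local + lam_global * global_term
```

Minimizing the second term shrinks within-class scatter locally, while the third term pulls each sample toward its class center globally; together they encourage the discriminative embedding structure the abstract describes.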
