Research on denoising sparse autoencoder

Autoencoders can adaptively learn the structure of data and represent it efficiently. These properties make them well suited to data of large volume and variety, and they avoid the high design cost and poor generalization of hand-crafted features. Moreover, using autoencoders for feature extraction in deep learning can improve classification accuracy. However, autoencoders suffer from poor robustness and overfitting. To extract useful features while improving robustness and avoiding overfitting, we study the denoising sparse autoencoder, obtained by adding a corrupting operation and a sparsity constraint to the traditional autoencoder. The results suggest that the autoencoders discussed in this paper are closely related, and that the studied model extracts interesting features that reconstruct the original data well. All results point to the proposed autoencoder as a promising building block for deep models.
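
The paper gives no implementation, so the sketch below is only one plausible reading of the model: it assumes masking corruption in the style of Vincent et al.'s denoising autoencoder and a KL-divergence sparsity penalty on the mean hidden activations, as in standard sparse-autoencoder formulations. The class name `DenoisingSparseAE` and the hyperparameters `noise_level`, `rho`, and `beta` are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAE(nn.Module):
    """Single-layer autoencoder trained on corrupted inputs with a sparsity penalty.

    Hypothetical sketch; layer sizes and activations are assumptions, not the
    paper's reported configuration.
    """
    def __init__(self, n_visible=784, n_hidden=256):
        super().__init__()
        self.encoder = nn.Linear(n_visible, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_visible)

    def corrupt(self, x, noise_level=0.3):
        # Masking noise: zero each input component with probability noise_level.
        mask = (torch.rand_like(x) > noise_level).float()
        return x * mask

    def forward(self, x):
        h = torch.sigmoid(self.encoder(self.corrupt(x)))  # encode the corrupted input
        x_hat = torch.sigmoid(self.decoder(h))            # reconstruct toward the clean input
        return x_hat, h

def kl_sparsity(h, rho=0.05):
    # KL divergence between the target activation rate rho and the mean hidden
    # activation rho_hat, summed over hidden units (the usual sparsity penalty).
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def loss_fn(x, x_hat, h, beta=0.1):
    # Reconstruction error is measured against the *uncorrupted* input,
    # so the model must denoise as well as compress.
    return F.mse_loss(x_hat, x) + beta * kl_sparsity(h)

# Minimal training step on random stand-in data.
model = DenoisingSparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # stand-in for a batch of MNIST-like vectors
x_hat, h = model(x)
loss = loss_fn(x, x_hat, h)
opt.zero_grad(); loss.backward(); opt.step()
```

Corrupting only the encoder input while keeping the reconstruction target clean is what distinguishes the denoising criterion from plain regularized reconstruction; the KL term then pushes the mean hidden activation toward a small target rate, so individual units fire selectively.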
