Broad learning system: Feature extraction based on K-means clustering algorithm

The recently proposed Broad Learning System (BLS) [1] demonstrates efficient and effective learning capability. The model has also been shown to be well suited to incremental learning by taking advantage of random vector functional-link neural networks. In this paper, a modified BLS structure based on K-means feature extraction is developed. Compared with the original Broad Learning System, acceptable performance is achieved on more complicated data sets, such as CIFAR-10. Furthermore, the results show that the model proposed in [1] is flexible and holds potential for various applications.
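The K-means feature extraction the abstract refers to is commonly done Coates-and-Ng style [29]: cluster input patches with K-means, then map each input to a nonnegative feature vector via a "triangle" activation on its distances to the learned centroids. The sketch below is a minimal, hypothetical illustration of that pipeline (plain NumPy, Lloyd's algorithm with a fixed iteration budget); the function names and the choice of triangle activation are assumptions, not the paper's actual implementation.

```python
import numpy as np

def kmeans_fit(patches, k, iters=10, seed=0):
    """Lloyd's K-means: returns a (k, d) array of centroids."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct input patches
    centroids = patches[rng.choice(len(patches), size=k, replace=False)]
    for _ in range(iters):
        # squared distance from every patch to every centroid: (n, k)
        d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):  # skip empty clusters
                centroids[j] = members.mean(axis=0)
    return centroids

def triangle_features(x, centroids):
    """Soft 'triangle' encoding: f_j = max(0, mean_dist - dist_j)."""
    d = np.sqrt(((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
    mu = d.mean(axis=1, keepdims=True)
    return np.maximum(0.0, mu - d)  # sparse, nonnegative features
```

In a BLS-style pipeline, the resulting feature matrix would play the role of the mapped-feature nodes, with the output weights still solved by ridge regression/pseudoinverse as in the original system.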

[1] Kilian Q. Weinberger, et al. Marginalized Denoising Autoencoders for Domain Adaptation, 2012, ICML.

[2] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.

[3] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.

[4] C. L. Philip Chen, et al. Data-intensive applications, challenges, techniques and technologies: A survey on Big Data, 2014, Inf. Sci.

[5] Yann LeCun, et al. What is the best multi-stage architecture for object recognition?, 2009, IEEE 12th International Conference on Computer Vision.

[6] Zhiwen Yu, et al. Hybrid Adaptive Classifier Ensemble, 2015, IEEE Transactions on Cybernetics.

[7] C. L. Philip Chen, et al. A Fuzzy Restricted Boltzmann Machine: Novel Learning Algorithms Based on the Crisp Possibilistic Mean Value of Fuzzy Numbers, 2018, IEEE Transactions on Fuzzy Systems.

[8] A. Krizhevsky. Convolutional Deep Belief Networks on CIFAR-10, 2010.

[9] Honglak Lee, et al. Sparse deep belief net model for visual area V2, 2007, NIPS.

[10] Allan Pinkus, et al. Multilayer Feedforward Networks with a Non-Polynomial Activation Function Can Approximate Any Function, 1991, Neural Networks.

[11] Honglak Lee, et al. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, 2011, AISTATS.

[12] Geoffrey E. Hinton, et al. Reducing the Dimensionality of Data with Neural Networks, 2006, Science.

[13] Xuelong Li, et al. Blind Image Quality Assessment via Deep Learning, 2015, IEEE Transactions on Neural Networks and Learning Systems.

[14] C. L. Philip Chen, et al. Fuzzy Restricted Boltzmann Machine for the Enhancement of Deep Learning, 2015, IEEE Transactions on Fuzzy Systems.

[15] Aapo Hyvärinen, et al. Natural Image Statistics - A Probabilistic Approach to Early Computational Vision, 2009, Computational Imaging and Vision.

[16] Geoffrey E. Hinton, et al. Modeling pixel means and covariances using factorized third-order Boltzmann machines, 2010, IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[17] Yihong Gong, et al. Linear spatial pyramid matching using sparse coding for image classification, 2009, CVPR.

[18] Yoh-Han Pao, et al. Stochastic choice of basis functions in adaptive function approximation and the functional-link net, 1995, IEEE Trans. Neural Networks.

[19] Maoguo Gong, et al. Change Detection in Synthetic Aperture Radar Images Based on Deep Neural Networks, 2016, IEEE Transactions on Neural Networks and Learning Systems.

[20] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.

[21] Dejan J. Sobajic, et al. Learning and generalization characteristics of the random vector functional-link net, 1994, Neurocomputing.

[22] Maoguo Gong, et al. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks, 2015, IEEE Transactions on Neural Networks and Learning Systems.

[23] Hareton K. N. Leung, et al. Incremental Semi-Supervised Clustering Ensemble for High Dimensional Data Clustering, 2016, IEEE Transactions on Knowledge and Data Engineering.

[24] C. L. Philip Chen, et al. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction, 1999, IEEE Trans. Syst. Man Cybern. Part B.

[25] Geoffrey E. Hinton, et al. Deep Boltzmann Machines, 2009, AISTATS.

[26] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.

[27] Hareton K. N. Leung, et al. Hybrid k-Nearest Neighbor Classifier, 2016, IEEE Transactions on Cybernetics.

[28] Gerhard Krieger, et al. The atoms of vision: Cartesian or polar?, 1999.

[29] Andrew Y. Ng, et al. Learning Feature Representations with K-Means, 2012, Neural Networks: Tricks of the Trade.

[30] Guang-Bin Huang, et al. Extreme Learning Machine for Multilayer Perceptron, 2016, IEEE Transactions on Neural Networks and Learning Systems.

[31] Y. Takefuji, et al. Functional-link net computing: theory, system architecture, and functionalities, 1992, Computer.

[32] Inderjit S. Dhillon, et al. Concept Decompositions for Large Sparse Text Data Using Clustering, 2004, Machine Learning.

[33] David J. Field, et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images, 1996, Nature.

[34] Yee Whye Teh, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.

[35] C. L. Philip Chen, et al. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture, 2018, IEEE Transactions on Neural Networks and Learning Systems.