TDN: Twice-Least-Square Double-Parallel Neural Networks

Extreme learning machines (ELMs) have been shown to perform well on a variety of generalization tasks. Recently, deep neural networks (DNNs) have also been shown to represent and capture higher-level abstractions, achieving even better generalization performance. In terms of learning speed, however, DNNs may take a rather long time to adjust their weights and biases, compared with the relatively fast learning of ELMs. Motivated by the complementary merits of ELMs and deep neural networks, we develop a novel Twice-Least-Square Double-Parallel Neural Network (TDN). In TDN, the weights connecting the hidden layers are determined by applying the least-squares method twice, while the weights and biases connecting the input and output layers are randomly generated. The output neurons of TDN are connected both to the hidden layer and directly to the input layer, so they capture not only the higher-level abstractions produced by the last hidden layer but also the information carried directly by the input neurons. With these characteristics, TDN achieves very good generalization performance on a range of classification and regression problems.
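To make the double-parallel idea concrete, the following is a minimal NumPy sketch of an ELM-style network whose output layer is connected both to the hidden activations and directly to the raw inputs, with the output weights obtained in closed form by least squares. The class name DoubleParallelELM and the single joint least-squares solve are our own illustrative simplifications; this is not the paper's exact two-stage (twice-least-square) procedure.

```python
import numpy as np

class DoubleParallelELM:
    """Illustrative double-parallel ELM-style network (a sketch, not TDN itself).

    Input-to-hidden weights and biases are randomly generated and kept fixed
    (ELM-style). The output layer sees both the hidden activations and the
    raw inputs; its weights are solved by least squares.
    """

    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        n_features = X.shape[1]
        # Randomly generated input weights and biases (never trained).
        self.W_in = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W_in + self.b)  # hidden-layer activations
        # Double-parallel design matrix: hidden features plus direct inputs.
        Z = np.hstack([H, X])
        # One least-squares solve for all output weights (TDN instead applies
        # least squares twice; a joint solve is used here for brevity).
        self.W_out, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W_in + self.b)
        return np.hstack([H, X]) @ self.W_out
```

Because the hidden weights are fixed and the output weights come from a single linear solve, training cost is dominated by one least-squares problem, which is the source of the fast learning speed the abstract attributes to ELM-style methods.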
