Fast learning for big data applications using parameterized multilayer perceptron

An innovative approach is proposed for handling Big Data with multilayer perceptrons (MLPs). Classifying Big Data with a large number of features using a standard MLP incurs high computational cost and training time. A parameterized multilayer perceptron (PMLP) is proposed in which the weight matrix is parameterized using periodic functions. Because periodic functions are bounded, the weight values are bounded, which provides inherent regularization, and the memory required to store the weight matrix is drastically reduced. On large benchmark datasets, the PMLP also improves classification accuracy while drastically reducing computational time compared with a standard MLP, making it a promising technique for handling Big Data.
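The abstract does not give the exact parameterization, but the idea can be sketched as follows: instead of storing every entry of an n_in × n_out weight matrix, each row is generated from a few scalar parameters through a sine function, so storage shrinks from n_in·n_out values to 3·n_in values and every weight is bounded by the row amplitude. The class name `PeriodicWeights` and the specific sine form below are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

class PeriodicWeights:
    """Hypothetical sketch: weights generated from a periodic function
    rather than stored explicitly. Per input row i, only three scalars
    (freq_i, phase_i, amp_i) are kept; the full matrix is materialized
    on demand, and each weight is bounded by |amp_i| (implicit
    regularization)."""

    def __init__(self, n_in, n_out, seed=None):
        rng = np.random.default_rng(seed)
        self.n_out = n_out
        self.freq = rng.uniform(0.1, 1.0, size=n_in)      # one frequency per row
        self.phase = rng.uniform(0.0, 2 * np.pi, size=n_in)
        self.amp = rng.uniform(0.5, 1.0, size=n_in)        # bounds |w_ij| <= amp_i

    def materialize(self):
        # w[i, j] = amp_i * sin(freq_i * j + phase_i)
        j = np.arange(self.n_out)
        return self.amp[:, None] * np.sin(self.freq[:, None] * j + self.phase[:, None])

    def forward(self, x):
        # x: (batch, n_in) -> (batch, n_out); weights built on the fly,
        # so no (n_in x n_out) matrix is ever stored between calls.
        return x @ self.materialize()

layer = PeriodicWeights(n_in=100, n_out=50, seed=0)
W = layer.materialize()
out = layer.forward(np.ones((4, 100)))
```

Here 100 × 50 = 5000 stored weights are replaced by 300 parameters, and the boundedness of the sine keeps every weight in [-amp_i, amp_i], which is the regularization effect the abstract alludes to.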
