Nonlinear system modeling with deep neural networks and autoencoders algorithm

Deep learning techniques have been applied successfully to pattern classification, but these advanced methods have seen little use in nonlinear system identification. In this paper, the neural model has a deep architecture whose structure is selected by a random search method. The initial weights of this deep neural model are obtained from denoising autoencoders. We propose a special unsupervised learning method that pre-trains the deep model using only the input data; standard supervised learning is then used to train the weights with the output data. The deep learning identification algorithms are validated on three benchmark examples.
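The pipeline described above — unsupervised denoising-autoencoder pretraining on the input data, followed by supervised training against the output data — can be sketched in a minimal form. This is an illustrative reconstruction, not the authors' algorithm: the toy system, layer size, masking-noise level, and the least-squares readout used for the supervised step are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_dae(X, n_hidden, noise=0.3, lr=0.1, epochs=200):
    """One layer of denoising-autoencoder pretraining with tied weights (illustrative)."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)   # hidden bias
    c = np.zeros(n_in)       # reconstruction bias
    for _ in range(epochs):
        X_noisy = X * (rng.random(X.shape) > noise)      # masking noise on the inputs
        H = sigmoid(X_noisy @ W + b)                     # encode corrupted input
        X_rec = sigmoid(H @ W.T + c)                     # decode with tied weights
        d_rec = X_rec - X                                # cross-entropy/sigmoid gradient
        d_hid = (d_rec @ W) * H * (1.0 - H)
        W -= lr * (X_noisy.T @ d_hid + d_rec.T @ H) / len(X)
        b -= lr * d_hid.mean(axis=0)
        c -= lr * d_rec.mean(axis=0)
    return W, b

# Toy nonlinear system: regressor vectors such as [u(k-1), u(k-2), y(k-1)] -> y(k)
X = rng.uniform(-1.0, 1.0, (200, 3))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

# 1) Unsupervised pretraining of the hidden layer with the input data only
Xs = (X + 1.0) / 2.0                       # scale to [0, 1] for sigmoid reconstruction
W1, b1 = pretrain_dae(Xs, n_hidden=10)

# 2) Supervised step: fit an output layer with the measured output data
H = sigmoid(Xs @ W1 + b1)
Phi = np.c_[H, np.ones(len(H))]            # hidden features plus bias column
w2 = np.linalg.lstsq(Phi, y, rcond=None)[0]
y_hat = Phi @ w2
print("identification MSE:", float(np.mean((y - y_hat) ** 2)))
```

In the paper the supervised step is gradient-based fine-tuning of all layers; the closed-form least-squares readout here only trains the output layer, which keeps the sketch short while preserving the two-stage unsupervised/supervised structure.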
