Structure and weight optimization of neural network based on CPA-MLR and its application in naphtha dry point soft sensor

The structure and weights of a neural network play an important role in its predictive performance. To overcome the main flaws of neural networks, such as under-fitting, over-fitting, and wasted computational resources, a correlation pruning algorithm combined with multiple linear regression (CPA-MLR) is proposed to optimize the structure and weights of neural networks. First, an initial three-layer network with the maximum number of hidden nodes is selected and trained with back-propagation (BP). Second, correlation analysis of the hidden-layer outputs is carried out to identify redundant hidden nodes. Third, the redundant nodes are deleted one by one, and a multiple linear regression model between the hidden-layer output and the expected input of the output layer, obtained through the inverse of the output-layer activation function, is used to compute the optimal output weights. Finally, the optimal network structure, corresponding to the best predictive performance, is obtained. A practical example, the development of a naphtha dry point soft sensor, is then used to illustrate the performance of CPA-MLR. The results show that the predictive performance of the soft sensor first improves and then degrades as redundant nodes are deleted, and that the best performance is obtained at the optimal number of hidden nodes.
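The prune-and-refit procedure described above can be summarized in code. The following is a minimal NumPy sketch, not the authors' implementation: the toy data, network sizes, learning rate, and all variable names are illustrative assumptions. It trains an oversized one-hidden-layer sigmoid network with plain BP, finds the pair of hidden nodes whose outputs are most correlated, deletes one of them, and refits the output weights by least squares against the logit (the inverse of the output sigmoid) of the targets.

```python
# Minimal sketch of the CPA-MLR idea, assuming a sigmoid output node.
# Toy data and hyperparameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with targets in (0, 1)
X = rng.uniform(-1, 1, size=(200, 3))
y = 1 / (1 + np.exp(-(X @ np.array([1.5, -2.0, 0.5]))))

n_hidden = 12                      # deliberately oversized hidden layer
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# --- Step 1: train the oversized network with BP (gradient descent) ---
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)               # hidden-layer output
    out = sigmoid(H @ W2 + b2)             # network output
    err = out - y[:, None]
    d_out = err * out * (1 - out)          # sigmoid derivative at output
    d_hid = (d_out @ W2.T) * H * (1 - H)   # back-propagated hidden delta
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(0)

# --- Step 2: correlation analysis of hidden outputs to flag redundancy ---
H = sigmoid(X @ W1 + b1)
C = np.corrcoef(H.T)                       # pairwise hidden-node correlations
np.fill_diagonal(C, 0)
i, j = np.unravel_index(np.abs(C).argmax(), C.shape)
drop = j                                   # node most correlated with another

# --- Step 3: delete the redundant node and refit output weights by MLR ---
keep = [k for k in range(n_hidden) if k != drop]
W1, b1 = W1[:, keep], b1[keep]
H = sigmoid(X @ W1 + b1)
# Expected input of the output node via the inverse sigmoid (the logit)
eps = 1e-6
target = np.log(np.clip(y, eps, 1 - eps) / np.clip(1 - y, eps, 1 - eps))
A = np.hstack([H, np.ones((len(X), 1))])   # design matrix with bias column
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
W2, b2 = coef[:-1][:, None], coef[-1:]     # least-squares output weights

mse = float(((sigmoid(H @ W2 + b2) - y[:, None]) ** 2).mean())
print(f"pruned node: {drop}, MSE after refit: {mse:.6f}")
```

In the full algorithm this prune-and-refit step would be repeated node by node, tracking prediction error after each deletion, and the structure with the best predictive performance would be kept as the final network.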
