Application of an Improved Variable Learning Rate Back Propagation Neural Network in Energy Dispersive X-Ray Fluorescence Quantitative Analysis

Artificial Neural Networks (ANNs) can be applied to data processing in Energy Dispersive X-Ray Fluorescence (EDXRF) analysis because of their ability to model nonlinear relationships. However, the conventional Back Propagation (BP) algorithm converges slowly and easily falls into local minima. An improved Variable Learning Rate Back Propagation (VLBP) neural network is therefore proposed: building on the traditional VLBP algorithm, a Lagrange interpolation polynomial is used to compute the additional parameter that adjusts the variable learning rate. In the experiments, different models are compared on the number of iterations required to reach a given error precision with the same batch of lead-zinc ore samples, and a 30-run stability test of these models is performed. The Zn concentration of a batch of lead-zinc ore samples is predicted with the improved algorithm, and the predicted values agree with chemical analysis values to within a relative error of 5%. In addition, 10 groups of samples whose characteristic peak counts exceed those of the training samples are selected to test generalization ability; the relative error is somewhat higher but still below 5%, indicating a degree of generalization ability. The results show that the improved VLBP can quickly and accurately predict the concentration of target elements in EDXRF analysis and converges markedly faster than BP and standard VLBP, although stochastic gradient descent and Adadelta remain more effective still.
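The abstract does not give the update rule itself, but the classic variable-learning-rate scheme that VLBP builds on can be sketched as follows: after each step, compare the new error with the previous one; if the error grew beyond a tolerance, reject the step and shrink the learning rate, otherwise accept it and grow the rate. This is a minimal illustrative sketch on a toy quadratic objective, not the paper's method: the parameter names (`lr_inc`, `lr_dec`, `err_ratio`) and their values are assumptions, and the paper's Lagrange-interpolation refinement of the rate adjustment is not reproduced here.

```python
import numpy as np

def vlbp_step(w, grad_fn, lr, err_prev, err_fn,
              lr_inc=1.05, lr_dec=0.7, err_ratio=1.04):
    """One variable-learning-rate gradient step (illustrative parameters)."""
    w_new = w - lr * grad_fn(w)
    err_new = err_fn(w_new)
    if err_new > err_ratio * err_prev:
        # Error rose too much: reject the step and reduce the rate.
        return w, lr * lr_dec, err_prev
    if err_new < err_prev:
        # Error decreased: accept the step and increase the rate.
        lr *= lr_inc
    return w_new, lr, err_new

# Toy quadratic objective standing in for the network's training error.
err_fn = lambda w: float(np.sum(w ** 2))
grad_fn = lambda w: 2 * w

w = np.array([5.0, -3.0])
lr, err = 0.4, err_fn(w)
for _ in range(50):
    w, lr, err = vlbp_step(w, grad_fn, lr, err, err_fn)
print(err)
```

The rate grows while progress is being made and is cut back whenever a step overshoots, which is the behavior the improved VLBP refines by interpolating the adjustment parameter instead of using fixed multipliers.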
