Residual Adaptive Algorithm Applied in Intelligent Real-time Calculation of Current RMS Value During Resistance Spot Welding

To address the large-residual problem that can arise during weight training of feed-forward neural networks, a comprehensive residual adaptive algorithm is proposed. It offers better stability than the standard Levenberg-Marquardt (L-M) algorithm and lower computational complexity than the classical Newton method. A comparison with the standard L-M algorithm confirms the improved performance of the proposed algorithm. The well-trained neural network is then embedded in a DSP controller to calculate the current RMS value in real time during resistance spot welding. Experimental results demonstrate the validity of the residual adaptive algorithm and the feasibility of the intelligent current-measurement method.
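The abstract does not spell out the update rule, but the residual-adaptive idea it builds on (taking a damped Gauss-Newton / L-M step when residuals are small, and retaining a Newton-like second-order residual term when residuals are large) can be sketched as below. This is a minimal illustrative sketch only: the function name residual_adaptive_step, the quasi-Newton estimate S_approx, and the large_residual switch are assumptions, not the authors' implementation.

import numpy as np

def residual_adaptive_step(J, r, S_approx, mu, large_residual):
    """One hypothetical residual-adaptive weight update for least-squares training.

    J              -- Jacobian of the residuals w.r.t. the network weights
    r              -- residual vector (network outputs minus targets)
    S_approx       -- quasi-Newton estimate of sum_i r_i * Hessian(r_i)
    mu             -- Levenberg-Marquardt damping factor
    large_residual -- include the second-order term only when residuals are large
    """
    n = J.shape[1]
    H = J.T @ J + mu * np.eye(n)        # Gauss-Newton / L-M approximation
    if large_residual:
        H = H + S_approx                # Newton-like correction for large residuals
    return -np.linalg.solve(H, J.T @ r) # weight increment

In practice the damping factor mu would be raised or lowered after each step depending on whether the training error decreased, as in standard L-M; only the forward pass of the trained network would need to run on the DSP controller.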
