Speeding Up the Training of Neural Networks with CUDA Technology

Training feed-forward neural networks can take a long time when large amounts of data are involved, even with efficient second-order algorithms such as Levenberg-Marquardt. Parallel architectures have become a common solution in high performance computing, since the technology used in current processors is approaching its speed limits. One architecture that has been gaining popularity is GPGPU (General-Purpose computing on Graphics Processing Units), which has received large investments from companies such as NVIDIA, the company that introduced the CUDA (Compute Unified Device Architecture) technology. This paper proposes a faster CUDA-based implementation of neural network training with the Levenberg-Marquardt algorithm. The results show that the total training time can be almost 30 times shorter than that of an equivalent implementation using the Intel Math Kernel Library (MKL). A case study on classifying electrical company customers is presented.
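The core computation in each Levenberg-Marquardt iteration is the weight update Δw = (JᵀJ + μI)⁻¹ Jᵀe, where J is the Jacobian of the network errors, e the error vector, and μ the damping factor; for large data sets, forming JᵀJ dominates the cost, and this is exactly the kind of dense linear algebra a GPU accelerates well. The following is a minimal sketch, not the paper's actual implementation, of how that step could be offloaded with CUDA and cuBLAS; the function name lmNormalEquations, the variable names, and the column-major layout are illustrative assumptions.

// Sketch of one Levenberg-Marquardt step's normal equations on the GPU:
// H = J^T J + mu*I and g = J^T e, formed with cuBLAS plus a small kernel.
#include <cuda_runtime.h>
#include <cublas_v2.h>

// Adds the damping term mu to the diagonal of the n x n matrix H.
__global__ void addDamping(float* H, int n, float mu) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) H[i * n + i] += mu;   // column-major: element (i, i)
}

// d_J: m x n Jacobian, d_e: m x 1 error vector, both already on the device
// in column-major order as cuBLAS expects. On return, d_H holds J^T J + mu*I
// and d_g holds J^T e; the solve for the weight update Δw follows separately.
void lmNormalEquations(cublasHandle_t handle,
                       const float* d_J, const float* d_e,
                       float* d_H, float* d_g,
                       int m, int n, float mu) {
    const float one = 1.0f, zero = 0.0f;

    // H = J^T * J  (n x n result from the m x n Jacobian)
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                n, n, m, &one, d_J, m, d_J, m, &zero, d_H, n);

    // g = J^T * e
    cublasSgemv(handle, CUBLAS_OP_T, m, n,
                &one, d_J, m, d_e, 1, &zero, d_g, 1);

    // H = H + mu * I
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    addDamping<<<blocks, threads>>>(d_H, n, mu);
}

The resulting n x n system is small relative to the m x n Jacobian when the training set is large, so the final solve for Δw can be done with cuSOLVER or even on the host without giving back the GPU speedup gained in forming the normal equations.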
