Massively parallel two-dimensional TLM algorithm on graphics processing units

Recent advances in computing technology have brought massively parallel computing power to desktop PCs. As multi-core processor technology matures, a new front in parallel computing based on graphics processors has emerged. A massively parallel 2D-TLM algorithm has been developed for NVIDIA graphics processors. The proposed parallel computing paradigm can be adopted straightforwardly to accelerate time-domain electromagnetic field modeling programs.
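To illustrate the kind of GPU mapping such an algorithm implies, the sketch below shows one time step of a 2-D shunt-node TLM mesh split into the usual scatter and connect kernels, with one CUDA thread per node. This is a minimal illustrative sketch, not the authors' implementation; the mesh size NX x NY, the port ordering, and the short-circuit boundary treatment are assumptions made for the example.

```cuda
// Minimal 2-D shunt-node TLM time step on the GPU (illustrative sketch only).
// Each node stores four pulse values, one per port: 0=W, 1=N, 2=E, 3=S.
#include <cuda_runtime.h>
#include <cstdio>

#define NX 256
#define NY 256
#define IDX(x, y) ((y) * NX + (x))

// Scatter: reflect the four incident pulses at every node
// (lossless shunt node, V_r,k = 0.5 * sum(V_i) - V_i,k).
__global__ void scatter(const float* vin, float* vout)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;

    int n = IDX(x, y);
    float v0 = vin[4 * n + 0], v1 = vin[4 * n + 1];
    float v2 = vin[4 * n + 2], v3 = vin[4 * n + 3];
    float s = 0.5f * (v0 + v1 + v2 + v3);

    vout[4 * n + 0] = s - v0;
    vout[4 * n + 1] = s - v1;
    vout[4 * n + 2] = s - v2;
    vout[4 * n + 3] = s - v3;
}

// Connect: reflected pulses travel to the neighbouring node and become the
// incident pulses of the next step; mesh edges act as short circuits here.
__global__ void connect(const float* vref, float* vinc)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;

    int n = IDX(x, y);
    vinc[4 * n + 0] = (x > 0)      ? vref[4 * IDX(x - 1, y) + 2] : -vref[4 * n + 0];
    vinc[4 * n + 2] = (x < NX - 1) ? vref[4 * IDX(x + 1, y) + 0] : -vref[4 * n + 2];
    vinc[4 * n + 1] = (y > 0)      ? vref[4 * IDX(x, y - 1) + 3] : -vref[4 * n + 1];
    vinc[4 * n + 3] = (y < NY - 1) ? vref[4 * IDX(x, y + 1) + 1] : -vref[4 * n + 3];
}

int main()
{
    float *d_inc, *d_ref;
    size_t bytes = 4 * NX * NY * sizeof(float);
    cudaMalloc(&d_inc, bytes);
    cudaMalloc(&d_ref, bytes);
    cudaMemset(d_inc, 0, bytes);
    cudaMemset(d_ref, 0, bytes);

    // Impulse excitation at the mesh centre (illustrative choice).
    float one = 1.0f;
    cudaMemcpy(d_inc + 4 * IDX(NX / 2, NY / 2), &one, sizeof(float),
               cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((NX + block.x - 1) / block.x, (NY + block.y - 1) / block.y);
    for (int t = 0; t < 1000; ++t) {     // one scatter/connect pair per TLM step
        scatter<<<grid, block>>>(d_inc, d_ref);
        connect<<<grid, block>>>(d_ref, d_inc);
    }
    cudaDeviceSynchronize();

    cudaFree(d_inc);
    cudaFree(d_ref);
    printf("done\n");
    return 0;
}
```

Because every node only reads its own ports in the scatter step and its four neighbours' ports in the connect step, both kernels map naturally onto one thread per node, which is what makes the TLM update well suited to massively parallel GPU execution.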
