MoM software for GPU hardware
The Method of Moments (MoM) is a backbone technique for the computational modeling and simulation of complex systems. With applications including fluid mechanics, electromagnetics, and fracture modeling, MoM is versatile and has laid the foundation for modern optimization methods. Modeling and simulation are essential to the success of today's complex engineering problems. Unfortunately, the size and complexity of some problems make the computations extremely time-consuming. Even with accelerated MoM variants, such as the Fast Multipole Method (FMM), some simulations can take days or weeks to complete with acceptable accuracy. Due to the limitations of traditional CPU hardware, research has expanded toward computational methods for Graphics Processing Unit (GPU) hardware. The GPU, the commodity off-the-shelf 3D graphics card, is specifically designed to process large graphics data sets (e.g., polygons and pixels) extremely quickly. The computational power of today's commodity GPUs has exceeded that of PC-based CPUs. As semiconductor fabrication technology advances, GPUs can exploit the additional hardware capability much more efficiently for computation than CPUs by increasing the number of parallel computational "pipelines." Additionally, many of the complex applications of MoM have computational patterns that are easily parallelized and can therefore be accelerated on commodity GPUs, approaching real-time computation on ordinary PCs and laptops. Computationally intensive modeling and simulation on GPUs is thus becoming a realistic design tool. This paper presents the process and results of creating Method of Moments software that exploits the parallelism of GPU hardware. Written in the GPU programming language CUDA, this software shows great potential for the future of complex modeling and simulation.
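To make the parallelization claim concrete, the sketch below shows one common way a MoM impedance-matrix fill can be mapped onto CUDA: one thread computes one matrix entry Z[m][n]. This is an illustration only, not the paper's implementation; the kernel uses a simplified scalar free-space Green's function, reduces basis and testing functions to point samples, and all names (fillImpedanceMatrix, testPts, basisPts) are assumptions.

// Hypothetical sketch: one CUDA thread per impedance-matrix entry.
// Simplified scalar Green's function; not the paper's actual code.
#include <cuda_runtime.h>
#include <cuComplex.h>
#include <math.h>

__global__ void fillImpedanceMatrix(const float3 *testPts,   // test (observation) points
                                    const float3 *basisPts,  // basis (source) points
                                    cuFloatComplex *Z,       // N x N matrix, row-major
                                    int N, float k)          // k: free-space wavenumber
{
    int m = blockIdx.y * blockDim.y + threadIdx.y;  // row index (test function)
    int n = blockIdx.x * blockDim.x + threadIdx.x;  // column index (basis function)
    if (m >= N || n >= N) return;

    float3 rm = testPts[m], rn = basisPts[n];
    float dx = rm.x - rn.x, dy = rm.y - rn.y, dz = rm.z - rn.z;
    float R = sqrtf(dx * dx + dy * dy + dz * dz) + 1e-6f;  // crude self-term guard

    // Scalar free-space Green's function: exp(-j*k*R) / (4*pi*R)
    float mag = 1.0f / (4.0f * 3.14159265f * R);
    Z[m * N + n] = make_cuFloatComplex(mag * cosf(k * R), -mag * sinf(k * R));
}

int main(void)
{
    // Host-side launch skeleton: tile the N x N matrix with 16 x 16 thread blocks.
    const int N = 1024;
    float3 *dTest, *dBasis;
    cuFloatComplex *dZ;
    cudaMalloc(&dTest,  N * sizeof(float3));
    cudaMalloc(&dBasis, N * sizeof(float3));
    cudaMalloc(&dZ,     (size_t)N * N * sizeof(cuFloatComplex));
    // ... copy the discretized geometry into dTest / dBasis here ...

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    fillImpedanceMatrix<<<grid, block>>>(dTest, dBasis, dZ, N, 2.0f * 3.14159265f);
    cudaDeviceSynchronize();

    cudaFree(dTest); cudaFree(dBasis); cudaFree(dZ);
    return 0;
}

Because every entry of the impedance matrix can be computed independently, this fill step is embarrassingly parallel, which is exactly the kind of computational pattern the abstract argues maps well onto commodity GPU hardware.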