CUDA-programmed GPUs are rapidly becoming a major platform in high-performance computing, and a growing number of applications are being ported to CUDA. However, much less research has evaluated the performance obtained when CUDA is integrated with other parallel programming paradigms. We have developed a general-purpose matrix multiplication algorithm and a Conjugate Gradient algorithm using CUDA and MPI. In this approach, MPI acts as the data-distribution mechanism between GPU nodes, while CUDA serves as the main computing engine. This allows GPU nodes to be connected over high-speed Ethernet without special interconnect technologies, and it lets the programmer treat the GPU nodes as separate units and execute different components of a program across several GPU nodes.
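The following is a minimal sketch of the kind of hybrid MPI+CUDA matrix multiplication the abstract describes, not the authors' actual implementation. It assumes one GPU per MPI rank, a square matrix dimension N divisible by the number of ranks, and a simple row-block decomposition; the file name, kernel name, and matrix size are hypothetical choices for illustration.

```cuda
// mpi_cuda_matmul.cu -- hypothetical sketch: MPI distributes row blocks of A,
// each rank's GPU computes its block of C = A * B, MPI gathers the results.
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define N 1024   // assumed square matrix dimension, divisible by the rank count

// Naive CUDA kernel: each thread computes one element of this rank's C block.
__global__ void matmul_block(const float* A_block, const float* B,
                             float* C_block, int rows, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A_block[row * n + k] * B[k * n + col];
        C_block[row * n + col] = sum;
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                        // rows of A owned by this rank
    std::vector<float> A, C(N * N);             // full matrices live on rank 0
    std::vector<float> B(N * N, 1.0f);          // B is replicated on every rank
    std::vector<float> A_block(rows * N), C_block(rows * N);
    if (rank == 0) A.assign(N * N, 1.0f);       // example input on the root

    // MPI side: scatter row blocks of A and broadcast B to every GPU node.
    MPI_Scatter(A.data(), rows * N, MPI_FLOAT,
                A_block.data(), rows * N, MPI_FLOAT, 0, MPI_COMM_WORLD);
    MPI_Bcast(B.data(), N * N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // CUDA side: local block computation on this node's GPU.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, rows * N * sizeof(float));
    cudaMalloc(&dB, N * N * sizeof(float));
    cudaMalloc(&dC, rows * N * sizeof(float));
    cudaMemcpy(dA, A_block.data(), rows * N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);

    dim3 threads(16, 16);
    dim3 blocks((N + threads.x - 1) / threads.x, (rows + threads.y - 1) / threads.y);
    matmul_block<<<blocks, threads>>>(dA, dB, dC, rows, N);
    cudaMemcpy(C_block.data(), dC, rows * N * sizeof(float), cudaMemcpyDeviceToHost);

    // MPI side: gather the partial result blocks back on rank 0.
    MPI_Gather(C_block.data(), rows * N, MPI_FLOAT,
               C.data(), rows * N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("C[0][0] = %f (expected %d)\n", C[0], N);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    MPI_Finalize();
    return 0;
}
```

Under these assumptions the program would be compiled with nvcc against the local MPI installation and launched with mpirun using one process per GPU node; the same scatter/compute/gather pattern extends naturally to the distributed matrix-vector products inside a Conjugate Gradient iteration.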