An important development in cluster computing is the availability of multiprocessor workstations. These provide additional computational power to the cluster without increasing network overhead, and they allow multiparadigm parallelism, which we define as the simultaneous application of both distributed- and shared-memory parallel processing techniques to a single problem. In this paper we compare the execution times and speedup of parallel programs written in a pure message-passing paradigm with those of programs that combine message-passing and shared-memory primitives in the same application. We consider three basic applications that are common building blocks for many scientific and engineering problems: numerical integration, matrix multiplication, and Jacobi iteration. Our results indicate that combining shared- and distributed-memory programming methods in the same program does not improve performance enough to justify the added programming complexity.
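To make the multiparadigm idea concrete, the sketch below combines both levels of parallelism for one of the building blocks above, numerical integration. This is an illustrative stand-in only, not the paper's code: Python's `multiprocessing` queues play the role of the message-passing layer (PVM/MPI in the paper), and `threading` plays the role of shared-memory parallelism within each multiprocessor workstation. All function names (`worker`, `parallel_integral`) are hypothetical.

```python
import multiprocessing as mp
import threading

def f(x):
    # Integrand: f(x) = x^2; the exact integral on [0, 1] is 1/3.
    return x * x

def worker(task_q, result_q, n_threads):
    """Distributed-memory level: each process receives its subinterval
    as a message, then uses shared-memory threads to integrate it."""
    a, b, n = task_q.get()                 # message from the coordinator
    h = (b - a) / n
    partial = [0.0] * n_threads            # shared among this process's threads

    def integrate(tid):
        # Shared-memory level: each thread sums an interleaved slice of
        # the midpoint rule and writes its result into the shared list.
        s = 0.0
        for i in range(tid, n, n_threads):
            x = a + (i + 0.5) * h
            s += f(x) * h
        partial[tid] = s

    threads = [threading.Thread(target=integrate, args=(t,))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    result_q.put(sum(partial))             # message back to the coordinator

def parallel_integral(a, b, n, n_procs=2, n_threads=2):
    """Coordinator: partition [a, b] across processes by message passing,
    then combine the per-process partial sums."""
    task_q, result_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(task_q, result_q, n_threads))
             for _ in range(n_procs)]
    for p in procs:
        p.start()
    width = (b - a) / n_procs
    for i in range(n_procs):               # distribute subintervals as messages
        task_q.put((a + i * width, a + (i + 1) * width, n // n_procs))
    total = sum(result_q.get() for _ in range(n_procs))
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_integral(0.0, 1.0, 100_000))
```

The structure mirrors the paper's setup: a pure message-passing version would simply set `n_threads = 1`, so the same driver can be used to compare the two paradigms.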