In high-performance computing, parallel and distributed systems improve performance by devoting multiple processors to a single problem. However, such systems often perform below their potential because of synchronization problems, communication overhead, I/O delays, and inefficient algorithms. This paper examines the effects of synchronization and communication on execution times for several categories of algorithmic structures. Our results show that the choice of algorithmic structure can affect program execution time even when communication times are assumed to be zero. With non-zero communication times, the asynchronous and nearest-neighbor structures show relatively little performance degradation, while the synchronous and asynchronous master-slave structures show a large decrease in performance. Finally, we present and evaluate theoretical bounds on execution time for these structures.
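The contrast between synchronous and asynchronous structures can be illustrated with a minimal analytical sketch (this is an illustration, not the paper's model): when every iteration ends at a barrier, each step takes as long as the slowest worker, whereas without barriers each worker proceeds at its own pace. The function names and the per-step times below are hypothetical.

```python
def synchronous_total_time(step_times, steps):
    """Synchronous structure: a barrier ends every step, so each step
    lasts as long as the slowest worker's step time."""
    return steps * max(step_times)

def asynchronous_finish_times(step_times, steps):
    """Asynchronous structure: no barriers, so each worker's finish
    time depends only on its own step time."""
    return [steps * t for t in step_times]

# Hypothetical per-step compute times for three workers, one slow.
times = [1.0, 1.0, 2.0]
print(synchronous_total_time(times, 10))    # 20.0: the slow worker sets the pace
print(asynchronous_finish_times(times, 10)) # [10.0, 10.0, 20.0]: fast workers finish early
```

Even with zero communication cost, the synchronous variant holds every worker to the slowest worker's pace, which is one way an algorithmic structure alone can lengthen execution time.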