Special Issue on Multicomputer Programming and Applications - Guest Editor's Introduction

The emergence of multicomputers during the past decade is a major achievement in the area of parallel processing. Multicomputers are a class of MIMD multiprocessor systems. These systems are organized as ensembles of nodes, where each node is a computer with its own processor and local memory, connected by a message-passing network. Multicomputers are scalable in the sense that they can be expanded to include a large number of processors relatively easily. Hence, multicomputers can provide massive parallelism for many applications. Because of their distributed-memory architecture, multicomputers are also referred to as distributed-memory multiprocessors. A variety of multicomputers is now commercially available, and the major research question is how to utilize these machines fully. How a parallel program is written to expose the maximum available parallelism, and how efficiently that program can be executed on a multicomputer, play an important role in attaining high performance. To achieve this goal, a well-developed programming environment must be provided. The programming tools embedded in such an environment help the programmer write more efficient programs for the multicomputer. In addition, parallelizing compilers can reduce the programmer's effort by automatically detecting the inherent parallelism in a program and generating parallel code that takes advantage of the multicomputer's massive parallelism. The parallelism of a program, however, is only one of the factors affecting performance. The message-passing mechanism of a multicomputer also has a great impact on system performance, especially for a large-scale machine. Hence, multicomputer programming must take into account the communication overhead incurred by the message-passing mechanism.
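The communication overhead mentioned above is commonly approximated by a startup-plus-bandwidth cost model for each message. The following is a minimal sketch of that model; the parameter values are illustrative assumptions, not measurements from any particular machine:

```python
# Simple cost model for one point-to-point message on a multicomputer:
#   time = alpha + beta * n
# where alpha is the per-message startup (latency) cost and beta is the
# per-byte transfer cost. The values below are assumed for illustration.

ALPHA = 100e-6   # startup latency per message, in seconds (assumed)
BETA = 10e-9     # transfer time per byte, in seconds (assumed)

def message_time(nbytes):
    """Estimated time to send a single message of nbytes bytes."""
    return ALPHA + BETA * nbytes

def total_time(message_sizes):
    """Total communication time for a sequence of messages (sizes in
    bytes), assuming they are sent one after another."""
    return sum(message_time(n) for n in message_sizes)

# Because the startup cost alpha is paid once per message, sending one
# large message is cheaper than many small messages carrying the same
# total data -- one reason message aggregation matters on multicomputers.
one_big = total_time([1_000_000])        # one 1 MB message
many_small = total_time([1_000] * 1000)  # 1000 messages of 1 KB each
print(one_big < many_small)  # -> True: the aggregated message wins
```

Under this model, minimizing the number of messages (not just the volume of data) is a central concern when mapping a program onto a large-scale multicomputer.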
In addition, the success of a multicomputer system also relies on the applications it can support. A number of important topics in multicomputer programming remain to be explored, such as load balancing, task scheduling, the cache problem, and multicomputer operating systems. For this special issue, eight papers were accepted for publication. The first paper, by Craig M. Chase, Alex L. Cheng, Anthony P. Reeves, and Mark R. Smith, entitled “Paragon: A Parallel Programming Environment for Scientific Applications Using Communication Structures,” introduces the Paragon project, which is directed toward the identification and exploration of programming methodologies for developing large-scale scientific applications on parallel computers. The parallel programming environment consists of a data-parallel programming language and a flexible run-time environment. The paper “SPMD Execution of Programs with Pointer-Based Data Structures on Distributed-Memory Machines,” by Rajiv Gupta, proposes an approach for supporting SPMD execution of programs with pointer-based data structures on distributed-memory machines. Language and compiler support for SPMD execution of such programs is described, and the problem of supporting dynamic data structures is addressed. The paper “Tiling Multidimensional Iteration Spaces for Multicomputers,” by J. Ramanujam and P. Sadayappan, discusses the problem of compiling multiply nested loops for multicomputers. It presents a method of aggregating loop iterations into tiles, where each tile executes atomically; the method must also avoid deadlock when partitioning the iteration space into tiles. Y. C. Lin and Y. H. Cheng's paper, entitled “Automatic Generation of Parallel Occam Programs for Transputer Rings,” proposes a novel approach to translating FP programs into parallel Occam programs for execution on rings of transputers.
In this paper, FORT, an FP-to-Occam translator for rings of transputers, is presented. The basic implementation of array algorithms in Occam is also discussed. In the paper “A Flexible Causal Broadcast Communication Interface for Distributed Applications,” coauthored by K. Ravindra and S. Samdarshi, a causal broadcast communication interface is described. The interface allows distributed applications to flexibly and uniformly specify message-ordering requirements. This paper uses