Optimum simulations among parallel computer networks

A number of computer network architectures have been proposed recently. The problem of simulating one network by another is important for the following reasons: (a) It enables us to compare and evaluate the computing power of different network architectures. (b) It enables us to save time by avoiding the rewriting of programs. For instance, a program may have been written explicitly for network A, while we have a different parallel network B; assigning the roles of A's processors to processors of B allows the program to be run on B. (c) As networks can be regarded as graphs, with each processor being a node and a connection between two processors being an edge, network embedding (simulation) has important applications in solving graph problems. (d) Network embeddings can be applied directly to the problem of mapping logical data structures into computer memories. For the above reasons, the problem of embedding one network into another has been intensively studied.

In this work, we study simulations among networks with natural architectures, namely meshes, as well as other popular networks, namely hypercubes and star networks. The objectives of our embedding algorithms are to distribute guest processors evenly among host processors and to minimize communication costs. The hypercube network is one of the most popular parallel architectures. We show that the optimum simulation of meshes by hypercubes can be achieved in most cases. Several embeddings of star networks into hypercubes are also developed, which exhibit a marked trade-off between communication delay and host size. It is shown that optimum or nearly optimum simulation can be achieved for practical cases. A dilation-4 embedding of 2-dimensional meshes into star networks is developed. We extend the previous results for special cases to the general case. Like hypercubes, star networks can also simulate meshes efficiently.
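To illustrate what an optimum (dilation-1) simulation of a mesh by a hypercube looks like, the sketch below uses the classical reflected Gray code construction for meshes whose side lengths are powers of two. This is a standard textbook construction shown here only for illustration, not the paper's embedding algorithm (which covers more general cases); the function names gray, embed, and dilation are chosen here for exposition.

```python
# Illustrative sketch (assumption: standard Gray-code construction, not the
# paper's algorithm): embed a 2^a x 2^b mesh into an (a+b)-dimensional
# hypercube with dilation 1, i.e. mesh neighbours map to hypercube nodes
# whose labels differ in exactly one bit.

def gray(x: int) -> int:
    """Binary reflected Gray code of x; consecutive values differ in one bit."""
    return x ^ (x >> 1)

def embed(i: int, j: int, b: int) -> int:
    """Map mesh node (i, j) to a hypercube label: Gray-coded row bits
    concatenated with Gray-coded column bits (column part uses b bits)."""
    return (gray(i) << b) | gray(j)

def dilation(a: int, b: int) -> int:
    """Maximum hypercube distance between the images of adjacent mesh nodes."""
    rows, cols = 1 << a, 1 << b
    worst = 0
    for i in range(rows):
        for j in range(cols):
            u = embed(i, j, b)
            for di, dj in ((1, 0), (0, 1)):        # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    v = embed(ni, nj, b)
                    worst = max(worst, bin(u ^ v).count("1"))  # Hamming distance
    return worst

if __name__ == "__main__":
    print(dilation(3, 4))   # prints 1: an 8 x 16 mesh embeds with dilation 1
```

Because each mesh step changes only one coordinate by one, and the Gray code of that coordinate changes in exactly one bit while the other coordinate's bits stay fixed, every mesh edge is mapped to a hypercube edge; the check above confirms a dilation of 1 for the power-of-two case.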