Overview of parallel processing

We are on the threshold of a new era in computer architecture. It is becoming increasingly difficult to obtain more performance from the time-honored von Neumann model, and many of the technological constraints that influenced its design over thirty years ago have changed drastically. Many of the arguments for processing a single instruction at a time no longer apply, and a number of enthusiastic parallel processing projects are exploring ways to let many processors work on a single problem at the same time. This, however, reopens a Pandora's box of questions about how computation should be done, and some of the strengths of the von Neumann model, which temporarily closed this box three decades ago, become especially apparent when one tries to replace it.

This overview treats the promises and accomplishments of parallel processing as well as the problems and work that remain. The paper is organized as follows: current driving forces for parallel processing; definitions and fundamental questions; a survey of projects; and emerging answers.

As will be shown, the field is at an interesting juncture. Much work has been done, and the ideas are now there for putting it all together. But some large experiments are needed to provide real results from real programs if the pace of progress is to be maintained.