Parallel Implementations of Neural Networks

Neural network models have attracted much attention recently by demonstrating their potential as an effective paradigm for implementing human-like intelligent processing. When applied to “real-world” problems, neural network models demand high processing rates. Fortunately, these models contain several inherently parallel computing structures that can be exploited for high-throughput implementations on parallel processing architectures. In this paper we describe the basic computational requirements of neural network models and the various interconnection structures they use. A number of inherently parallel aspects of neural computing are described in detail, along with the specific demands each places on the supporting parallel processing architecture. The main obstacle to efficient parallel implementation of neural networks is shown to be the difficulty of efficiently supporting the complex and widely differing interconnection structures used by various neural network models. We survey several proposed implementation techniques, organized according to a taxonomy of neural network interconnection structures.
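As a minimal illustration of the inherent parallelism the abstract refers to, consider the forward pass of a single fully connected layer: each neuron's weighted sum is independent of every other neuron's, so the rows of the matrix-vector product can be distributed across processors. The sketch below (a hypothetical example, not from the paper) shows this structure in plain Python with NumPy; the function and variable names are illustrative assumptions.

```python
import numpy as np

def layer_forward(W, x, b):
    """Forward pass of one fully connected layer.

    W : (n_neurons, n_inputs) weight matrix
    x : (n_inputs,) input activation vector
    b : (n_neurons,) bias vector

    Each row of the product W @ x is an independent dot product, so the
    n_neurons weighted sums can be computed in parallel; the elementwise
    sigmoid that follows is likewise embarrassingly parallel.
    """
    z = W @ x + b
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation, applied elementwise

# Tiny example: two neurons, two inputs.
W = np.array([[0.5, -0.2],
              [0.1,  0.3]])
x = np.array([1.0, 2.0])
b = np.zeros(2)
y = layer_forward(W, x, b)
```

In a parallel implementation, the communication pattern needed to gather the inputs `x` to each processor mirrors the network's interconnection structure, which is why that structure dominates implementation efficiency, as the survey discusses.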