Parallel methods for implementations of neural networks

Summary form only given. The inherent parallelism of neural network structures suggests a straightforward route to parallel implementation in hardware. Unfortunately, the diversity of neural network types, the limited analytical data on their computational requirements, and their demanding communication requirements have all been significant impediments to the development of a general-purpose massively parallel neurocomputer. The authors have established a basic taxonomy of neural network implementations based on the granularity of parallelism exploited. For each class of implementation, a detailed analysis was carried out of the possible sources of parallelism in neural network models, along with architectural characteristics and their effective use in neural computation. This analysis is intended to serve as a framework for the design of future neurocomputer systems.
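The granularity-based taxonomy the abstract refers to can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the authors' method: it shows neuron-level parallelism, one of the finer grains such a taxonomy would cover, by treating each neuron's weighted sum in a layer as an independent task (the threshold activation and all names are hypothetical).

```python
from concurrent.futures import ThreadPoolExecutor

def neuron_output(weights, inputs, bias=0.0):
    """One neuron: weighted sum of inputs followed by a threshold activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 if s > 0 else 0.0

def layer_forward_parallel(weight_matrix, inputs, workers=4):
    """Neuron-level parallelism: every neuron in the layer depends only on
    the shared input vector, so all dot products can run concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda w: neuron_output(w, inputs), weight_matrix))

# Illustrative 3-neuron layer with a 2-element input vector
W = [[0.5, -0.2], [-1.0, 1.0], [0.3, 0.3]]
x = [1.0, 1.0]
print(layer_forward_parallel(W, x))  # → [1.0, 0.0, 1.0]
```

Coarser grains in such a taxonomy (e.g., layer-level pipelining or training-set partitioning across replicas) would parallelize over larger units in the same way, trading communication frequency against task size.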