Neural network implementations and speed-up on Massively Parallel machines

Abstract

This paper investigates large-scale learning algorithms and their implementation on Massively Parallel machines. The system prototype described here is part of an integrated environment for developing neural network applications, consisting of: i) a library of neural models and associated tools, and ii) a mapping system responsible for providing generic and efficient implementations on a spectrum of parallel machines, ranging from coarse-grain MIMD to fine-grain, Massively Parallel SIMD machines. We also describe the implementation of standard learning algorithms on the Distributed Array of Processors (DAP) and show that a speed-up of 50 is obtained for a typical pattern recognition application.