Implementations of Very Large Recurrent ANNs on Massively Parallel SIMD Computers

Abstract. We have reviewed the possibilities for optimisation in the SANS model [4] and have used those findings to implement programs that run very large recurrent networks, 8–16K units in the fully connected case, effectively on an 8K-processor Connection Machine (CM). We make use of the assumption that our coding is sparse, and we exploit this consistently when mapping the recurrent network onto the massively parallel machine. To suit different applications, we have implemented programs for both fully and sparsely connected networks; the implementation for the sparsely connected case uses a compacted weight matrix. During relaxation, both implementations exploit the fact that the activity patterns are sparse, as sketched below.
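The sparse-activity relaxation for the sparsely connected case can be sketched as follows. This is a minimal illustrative sketch in Python/NumPy, assuming a CSR-like layout (indptr, indices, weights) for the compacted weight matrix and a simple threshold update rule; the names relax_step, active and theta are hypothetical, and the paper's actual implementation is data-parallel Connection Machine code rather than serial Python.

    import numpy as np

    def relax_step(active, indptr, indices, weights, n_units, theta=0.0):
        # One relaxation step: only the currently active units propagate
        # their output, so the cost scales with the (sparse) activity
        # rather than with the total number of units.
        # indptr/indices/weights hold the compacted weight matrix in a
        # CSR-like form; row i lists the outgoing connections of unit i.
        potential = np.zeros(n_units)
        for i in active:
            lo, hi = indptr[i], indptr[i + 1]
            potential[indices[lo:hi]] += weights[lo:hi]
        # Units whose summed input exceeds the threshold form the new
        # sparse activity pattern.
        return np.flatnonzero(potential > theta)

Iterating relax_step until the active set stops changing corresponds to relaxing the network to a fixed point; the fully connected version differs only in that the weight rows are dense rather than compacted.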

[1] Shun-ichi Amari et al. Characteristics of sparsely encoded associative memory. Neural Networks, 1989.

[2] Örjan Ekeberg et al. A One-Layer Feedback Artificial Neural Network with a Bayesian Learning Rule. Int. J. Neural Syst., 1989.

[3] Örjan Ekeberg et al. Reliability and Speed of Recall in an Associative Network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1985.