Sparse distributed memory implementations on a tree-shaped parallel neurocomputer

This paper presents two different realizations of a sparse distributed memory (SDM) model. For parallelization purposes, the addressing, storage, and retrieval operations are explained in detail, and some existing implementations on various computing platforms are reviewed before introducing the tree-shaped parallel computer TUTNC (Tampere University of Technology Neural Computer). The architecture and main features of TUTNC are presented in order to map SDM onto the system in a columnwise and a rowwise manner. The two mappings are compared in terms of measured execution time with different parameter sets. Speedup and performance estimates are also given for a larger system. The results show that SDM can be parallelized well on TUTNC.
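To make the addressing, storage, and retrieval operations concrete, the following is a minimal sketch of Kanerva-style SDM in Python. It is illustrative only and does not reflect the paper's TUTNC mappings; the parameter values (address width N, number of hard locations M, activation radius R) are assumptions chosen for the example.

```python
import numpy as np

# Illustrative SDM sketch (assumed parameters, not the paper's configuration).
rng = np.random.default_rng(0)
N, M, R = 256, 1000, 120  # address width, hard locations, Hamming activation radius

hard_addresses = rng.integers(0, 2, size=(M, N))  # fixed random hard locations
counters = np.zeros((M, N), dtype=int)            # one counter vector per location

def activated(address):
    """Addressing: select locations within Hamming radius R of the address."""
    dist = np.count_nonzero(hard_addresses != address, axis=1)
    return dist <= R

def store(address, data):
    """Storage: add +1/-1 to the counters of every activated location."""
    sel = activated(address)
    counters[sel] += np.where(data == 1, 1, -1)

def retrieve(address):
    """Retrieval: sum counters over activated locations, threshold at zero."""
    sel = activated(address)
    return (counters[sel].sum(axis=0) >= 0).astype(int)

# Autoassociative usage: store a word under its own address, then read it back.
word = rng.integers(0, 2, size=N)
store(word, word)
recalled = retrieve(word)
```

The distance computation over all M hard locations dominates the cost and is independent per location, which is what makes mappings like the paper's columnwise and rowwise schemes natural candidates for parallelization.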