A dedicated massively parallel architecture for the Boltzmann machine

A key task for neural network research is the development of neurocomputers able to speed up learning algorithms so that they can be applied and tested on real cases. This paper presents a massively parallel architecture specifically designed to support the Boltzmann machine (BM) neural network. The strength of this architecture is its simplicity and reliability combined with a low implementation cost. Despite the impressive speedup obtained by accelerating the standard BM algorithm, the architecture does not rely on special techniques to expose parallelism in the simulated annealing task, such as changing the state of multiple neurons at once. Features of the architecture include: (1) speed: the architecture yields a speedup of N (where N is the number of neurons constituting the BM) with respect to a standard implementation on sequential machines; (2) low cost: the architecture requires the same amount of memory as a sequential implementation, the only additional cost being one adder per neuron; (3) WSI (wafer-scale integration) capability: the processor interconnection is limited to a single bus for any number of implemented processors, the architecture scales in the number of processors without any software or hardware modification, and the simplicity of the processors makes built-in self-test techniques feasible; (4) high weight dynamic range: the architecture performs its computation on 32-bit integer values, therefore offering a wide range of variability for the weights.
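To make the factor-N claim in item (1) concrete: in a sequential BM simulation, each stochastic neuron update is dominated by the net-input sum over all N weights. The sketch below is a minimal sequential C rendering of that update, assuming a fully connected BM with binary 0/1 states and 32-bit integer weights; all names (N, w, s, net_input, update_neuron) and the cooling schedule are illustrative, not taken from the paper. A plausible reading of the architecture is that the per-neuron adders perform the N additions of this inner loop concurrently (e.g., by broadcasting each state change over the single bus), reducing the per-update cost from O(N) to O(1).

```c
#include <stdint.h>
#include <stdlib.h>
#include <math.h>

#define N 256  /* hypothetical network size, not from the paper */

/* 32-bit integer weights, matching the dynamic range in item (4) */
int32_t w[N][N];   /* symmetric weight matrix, w[i][j] == w[j][i] */
int     s[N];      /* binary neuron states, 0 or 1 */

/* Net input of neuron i. On a sequential machine this loop costs O(N)
   per update; with one adder per neuron the N partial sums can
   accumulate in parallel, which is the source of the speedup of N. */
int64_t net_input(int i) {
    int64_t sum = 0;
    for (int j = 0; j < N; j++)
        sum += (int64_t)w[i][j] * s[j];
    return sum;
}

/* One stochastic update at temperature T (a simulated annealing step):
   neuron i turns on with the standard Boltzmann acceptance probability. */
void update_neuron(int i, double T) {
    double p = 1.0 / (1.0 + exp(-(double)net_input(i) / T));
    s[i] = ((double)rand() / RAND_MAX) < p;
}

int main(void) {
    /* toy symmetric weights with zero self-connections, for illustration */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            w[i][j] = w[j][i] = (i == j) ? 0 : (int32_t)(rand() % 201 - 100);

    double T = 100.0;
    for (int step = 0; step < 1000; step++) {
        update_neuron(rand() % N, T);
        T *= 0.995;  /* geometric cooling schedule, purely illustrative */
    }
    return 0;
}
```

Under this reading, the memory claim in item (2) also follows: the weight matrix is stored once, exactly as in the sequential version, and only the adders are added hardware.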