A key task in neural network research is the development of neurocomputers able to speed up learning algorithms, allowing their application and testing in real cases. This paper presents a massively parallel architecture specifically designed to support the Boltzmann machine (BM) neural network.
The strengths of this architecture are its simplicity and reliability, together with a low implementation cost. Despite the impressive speedup obtained by accelerating the standard BM algorithm, the architecture does not rely on special techniques to expose parallelism in the simulated annealing task, such as changing the state of multiple neurons simultaneously.
Features of the architecture include: (1) speed: the architecture achieves a speedup of N (where N is the number of neurons constituting the BM) with respect to a standard implementation on sequential machines; (2) low cost: the architecture requires the same amount of memory as a sequential implementation, the only additional cost being an adder for each neuron; (3) WSI capabilities: processor interconnection is limited to a single bus for any number of implemented processors, the architecture is scalable in the number of processors without any software or hardware modification, and the simplicity of the processors allows built-in self-test techniques to be implemented; (4) high weight dynamics: the architecture performs computation on 32-bit integer values, offering a wide range of weight variability.
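To make the source of the factor-N speedup concrete, the following is a minimal sketch of one annealing sweep of a Boltzmann machine in sequential software. The inner sum over j is the local field of neuron i; in the proposed architecture, one adder per neuron accumulates these sums in parallel over the shared bus, replacing the O(N) inner loop with O(1) hardware time. The function name and the use of a sigmoid acceptance rule are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def bm_anneal_sweep(weights, states, temperature, rng=random):
    """One sequential sweep of Boltzmann machine simulated annealing.

    weights: symmetric N x N integer matrix (32-bit range, matching the
             integer arithmetic of the proposed architecture).
    states:  list of N binary (0/1) neuron states, updated in place.
    """
    n = len(states)
    for i in range(n):
        # Local field of neuron i. On the proposed hardware, each neuron's
        # dedicated adder would accumulate this sum in parallel, which is
        # where the factor-N speedup over a sequential machine comes from.
        local_field = sum(weights[i][j] * states[j] for j in range(n) if j != i)
        # Stochastic acceptance at the current annealing temperature.
        p_on = 1.0 / (1.0 + math.exp(-local_field / temperature))
        states[i] = 1 if rng.random() < p_on else 0
    return states
```

A usage example with a two-neuron network and mutually excitatory weights:

```python
random.seed(0)
final = bm_anneal_sweep([[0, 5], [5, 0]], [1, 1], temperature=1.0)
```
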