The deterministic Boltzmann machine (DBM) is a neural network architecture with a simple, local learning rule, making it an ideal candidate for implementation in massively parallel digital or analog VLSI hardware. Because the learning rule can compensate for hardware deficiencies, it simplifies such an implementation. This paper explores the effects of various nonideal analog hardware characteristics and identifies which of them are automatically compensated for by the learning algorithm and which must be carefully avoided during hardware design. Moderate levels of most nonideal characteristics are found to be tolerated well. However, in the case of continuous learning of capacitively stored weights, the DBM cannot tolerate charge decay on the weight storage capacitors or offsets in the learning circuitry, because these may cause the synaptic weight values to drift, resulting in oscillation. These effects can be alleviated by incorporating learning thresholds into the adaptation circuitry, at the expense of increased residual errors.
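To make the contrastive learning rule and the role of a learning threshold concrete, the sketch below (not taken from the paper) applies one weight update of the form Δw_ij = η(s_i⁺ s_j⁺ − s_i⁻ s_j⁻), with illustrative offset, capacitor-decay, and threshold terms. The function name, parameter names, and numerical values are assumptions for illustration only; they are not the paper's circuit model.

```python
import numpy as np

def dbm_weight_update(w, s_clamped, s_free, eta=0.01,
                      offset=0.0, decay=0.0, threshold=0.0):
    """One contrastive DBM weight update with illustrative nonidealities.

    w          : current weight matrix
    s_clamped  : unit activations with outputs clamped to targets ("plus" phase)
    s_free     : unit activations after free mean-field relaxation ("minus" phase)
    offset     : additive error of the learning circuitry (assumed model)
    decay      : fractional charge loss of the weight capacitor per update
    threshold  : learning threshold; updates smaller than this are ignored
    """
    # Ideal local contrastive Hebbian term: eta * (s_i+ s_j+ - s_i- s_j-)
    dw = eta * (np.outer(s_clamped, s_clamped) - np.outer(s_free, s_free))
    dw += offset                       # systematic drift from circuit offsets
    dw[np.abs(dw) < threshold] = 0.0   # dead zone blocks drift-sized updates
    return (1.0 - decay) * w + dw      # capacitor charge decay shrinks w each step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.zeros((4, 4))
    s_plus = rng.random(4)    # stand-in activations, clamped phase
    s_minus = rng.random(4)   # stand-in activations, free phase
    w = dbm_weight_update(w, s_plus, s_minus,
                          offset=1e-4, decay=1e-3, threshold=5e-4)
    print(w)
```

In this toy model, a nonzero offset or decay adds the same small drift to every update; raising the threshold suppresses that drift (avoiding the oscillation described above) but also discards genuinely small error-correcting updates, which is the residual-error tradeoff the abstract mentions.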