The authors present mathematical results relating to recent neural network algorithms that employ stochastic pulse encoding. In such algorithms, neural activations and connection weights are encoded as stochastic streams of pulses, with the average pulse density representing the signal or weight value. The authors derive the precise form of the expected output for two- and three-input neurons and characterize these functions in the limit of a large number of inputs. They then address a fundamental limitation inherent in these stochastic techniques: their finite precision. Precision depends on the pulse averaging period: the longer this period (i.e., the larger the number of pulses sampled), the higher the precision. The authors derive exact expressions for the distribution of neural periods and carry out a statistical analysis to find the averaging period required for a precision of five bits, a resolution that others have determined to be necessary for successful implementations of backpropagation. They find that approximately 1000 pulses are required for 5-bit precision. These results reveal fundamental limits on the speed and memory requirements of stochastic pulse implementations of neural learning algorithms.
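To make the precision claim concrete, the following is a minimal sketch in Python/NumPy. The Bernoulli pulse model, the one-standard-deviation error criterion, and all names below are illustrative assumptions, not the authors' exact statistical analysis; the sketch simply encodes a value as a stochastic pulse stream, recovers it by averaging, and reproduces the back-of-the-envelope count of roughly 1000 pulses for 5-bit precision.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_stream(value, n_pulses):
    """Encode a value in [0, 1] as a stochastic pulse stream:
    each pulse fires independently with probability `value`
    (an assumed Bernoulli model of the encoding)."""
    return rng.random(n_pulses) < value

# Decoding: the average pulse density over the sampling window
# estimates the encoded value.
v = 0.3
for n in (32, 128, 1024):
    est = pulse_stream(v, n).mean()
    print(f"n={n:5d}  estimate={est:.4f}  error={abs(est - v):.4f}")

# Heuristic precision argument: the estimate is a Bernoulli mean, so its
# standard deviation is sqrt(v * (1 - v) / n). Requiring this to stay
# below half a 5-bit quantization step (1/64) in the worst case
# (v = 0.5) gives 0.5 / sqrt(n) <= 1/64, i.e. n >= 1024,
# consistent with the roughly 1000 pulses quoted above.
n_required = (0.5 * 64) ** 2
print(f"pulses needed for 5-bit precision (worst case): {n_required:.0f}")
```

Running the sketch shows the estimation error shrinking roughly as 1/sqrt(n), which is why each extra bit of precision quadruples the required averaging period: this scaling is the source of the speed and memory limits described above.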