Threshold non-linearity effects on weight-decay tolerance in analog neural networks

It is shown that one of the VLSI design issues that must be considered is the temperature of the activation function used in the neurons. Although low-temperature activation functions (such as simple comparators) are efficient to implement in hardware and consume minimal power, the overall accuracy of the network may suffer; either nonstandard training schemes that move the synaptic sums away from the transition region, or higher-temperature activation functions, must then be used. The influence of activation temperature on the robustness of an artificial neural network (ANN) subjected to weight decay is presented, along with a simulator design specifically tailored to studying analog VLSI ANN implementations. Data from training a typical backpropagation network with activation functions of various temperatures, and from tracking the network's sensitivity to weight decay, are reported.
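To make the two central quantities concrete, the sketch below trains a small backpropagation network on XOR using a sigmoid with an explicit temperature parameter, f(x) = 1 / (1 + exp(-x/T)) (the standard temperature-parameterized form, assumed here since the abstract does not give a definition; low T approaches a hard comparator, high T a shallow transition), then applies repeated multiplicative weight decay, w <- (1 - lambda) w, and counts how many decay steps each network tolerates. This is a minimal illustration, not the authors' simulator; the network size, learning rate, temperatures, and 1% decay rate are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, T):
    # Temperature-parameterized sigmoid; clip the argument to avoid
    # overflow warnings when T is small (steep transition).
    return 1.0 / (1.0 + np.exp(-np.clip(x / T, -60.0, 60.0)))

# XOR task: four patterns, one binary target each.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def train(T, hidden=4, lr=0.5, epochs=20000):
    # Two-layer network, no biases, trained by plain gradient descent.
    W1 = rng.normal(0.0, 1.0, (2, hidden))
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1, T)
        out = sigmoid(h @ W2, T)
        # Backprop with f'(x) = f(x) * (1 - f(x)) / T.
        d_out = (out - y) * out * (1.0 - out) / T
        d_h = (d_out @ W2.T) * h * (1.0 - h) / T
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return W1, W2

def accuracy(W1, W2, T):
    out = sigmoid(sigmoid(X @ W1, T) @ W2, T)
    return float(np.mean((out > 0.5) == y))

for T in (0.2, 1.0, 5.0):
    W1, W2 = train(T)
    acc0 = accuracy(W1, W2, T)
    # Apply repeated multiplicative decay, w <- (1 - lam) * w, until the
    # classification accuracy falls below its post-training value.
    lam, steps = 0.01, 0
    while accuracy(W1, W2, T) >= acc0 and steps < 500:
        W1 *= 1.0 - lam
        W2 *= 1.0 - lam
        steps += 1
    print(f"T={T}: trained accuracy {acc0:.2f}, "
          f"survived {steps} decay steps of 1%")

Multiplicative decay shrinks every weight toward zero, which drives all synaptic sums toward the activation's transition region; a run of this sketch makes it easy to see how quickly the decision boundary degrades at each assumed temperature, which is the kind of sensitivity the abstract says the paper measures.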