Fault tolerance via weight noise in analog VLSI implementations of MLPs - a case study with EPSILON

Training with weight noise has been shown to be an effective means of improving the fault tolerance of multilayer perceptrons (MLPs). This paper investigates the use of weight noise during MLP training to compensate for the inherent errors encountered in VLSI implementations. Weight adaptation is conducted solely in software, eliminating the need for costly in-the-loop training. The particular VLSI implementation considered here is the EPSILON processor card developed at Edinburgh University. Both software and hardware experiments demonstrate the effectiveness of this approach. The case study also highlights what we believe to be a number of common inadequacies of custom-designed hardware; in particular, the limited dynamic range of EPSILON proves to be a problem. In summary, we show that networks trained with weight noise are fault-tolerant but also require an increased dynamic range to exploit this property.
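
The training procedure described above (weight adaptation carried out entirely in software, with noise injected into the weights to emulate analog imprecision) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the additive Gaussian noise model, the noise level sigma, the toy 2-4-1 XOR network, and the learning rate are all assumptions made here for clarity.

```python
# Minimal sketch of MLP training with synaptic weight noise (not the paper's code).
# At each step the forward/backward pass uses a noisy copy of the weights,
# while the update is applied to the clean software weights that would
# ultimately be downloaded to the hardware.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 2-4-1 network on XOR, purely to illustrate the training-loop structure.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 0.5, (2, 4))
W2 = rng.normal(0, 0.5, (4, 1))
lr, sigma = 0.5, 0.05          # sigma: assumed weight-noise standard deviation

for step in range(20000):
    # Noisy weight copies emulate analog imprecision during training only.
    W1n = W1 + rng.normal(0, sigma, W1.shape)
    W2n = W2 + rng.normal(0, sigma, W2.shape)

    # Forward pass through the noisy weights.
    h = sigmoid(X @ W1n)
    out = sigmoid(h @ W2n)

    # Backward pass (squared error), gradients taken through the noisy weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2n.T) * h * (1 - h)

    # Updates accumulate in the clean (software) weights.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# Noise-free evaluation with the final clean weights.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

The essential point is that gradients are computed through the perturbed weights, so the solution is pushed towards regions of weight space that tolerate perturbation, while the stored weights themselves remain the clean values later transferred to the analog hardware.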
