Analog Neural Networks of Limited Precision I: Computing with Multilinear Threshold Functions

Experimental evidence has shown analog neural networks to be extremely fault-tolerant; in particular, their performance does not appear to be significantly impaired when precision is limited. Analog neurons with limited precision essentially compute k-ary weighted multilinear threshold functions, which divide R^n into k regions with k-1 hyperplanes. The behaviour of k-ary neural networks is investigated. There is no canonical set of threshold values for k > 3, although canonical sets exist for binary and ternary neural networks. The weights can be made integers of only O((z+k) log(z+k)) bits, where z is the number of processors, without increasing hardware or running time. The weights can be made ±1 while increasing running time by a constant multiple and hardware by a small polynomial in z and k. Binary neurons can be used if the running time is allowed to increase by a larger constant multiple and the hardware is allowed to increase by a slightly larger polynomial in z and k. Any symmetric k-ary function can be computed in constant depth and size O(n^(k-1)/(k-2)!), and any k-ary function can be computed in constant depth and size O(nk^n). The alternating neural networks of Olafsson and Abu-Mostafa, and the quantized neural networks of Fleisher, are closely related to this model.
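
To make the central object concrete, here is a minimal Python sketch of a k-ary weighted threshold neuron: the weighted sum of the inputs is compared against k-1 increasing threshold values, partitioning R^n into k regions, and the output is the index of the region. The function name, example weights, and the numpy dependency are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def k_ary_threshold(x, w, thresholds):
    """k-ary weighted threshold neuron (illustrative sketch).

    Computes the weighted sum s = w . x and returns the index j in
    {0, ..., k-1} of the region s falls into, where the k regions of
    R^n are separated by the k-1 increasing values in `thresholds`.
    """
    s = np.dot(w, x)
    # searchsorted returns the number of thresholds <= s,
    # i.e. the index of the region containing s.
    return int(np.searchsorted(thresholds, s, side="right"))

# Example: a ternary (k = 3) neuron over R^2.
# Two thresholds (-1 and 1) split the line of weighted sums
# into three regions, so the neuron outputs 0, 1, or 2.
w = np.array([0.5, -0.25])
thresholds = np.array([-1.0, 1.0])
for x in ([4.0, 0.0], [0.0, 0.0], [-4.0, 0.0]):
    print(x, "->", k_ary_threshold(np.array(x), w, thresholds))
# -> region 2, region 1, region 0 respectively
```

Setting k = 2 with a single threshold recovers the ordinary binary threshold neuron; k = 3 corresponds to the ternary case for which, as noted above, a canonical set of threshold values exists.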

[1] Michael Fleisher. The Hopfield Model with Multi-Level Neurons, 1987, NIPS.

[2] S. Muroga et al. Theory of majority decision elements, 1961.

[3] Uzi Vishkin et al. Constant Depth Reducibility, 1984, SIAM J. Comput.

[4] Sverrir Olafsson et al. The Capacity of Multilevel Threshold Functions, 1988, IEEE Trans. Pattern Anal. Mach. Intell.