Silicon implementation of self-learning neural networks

The chip is fabricated in 1.0-µm CMOS technology and integrates 336 neurons and 28K synapses, equivalent to 56K symmetrical connections. Its branch-neuron-unit (BNU) architecture enables interconnection of up to 200 chips, based on the assumption of a 30% firing rate and 1% fluctuation of each neuron unit. With this scheme, processing speed is independent of the number of interconnected chips. Interconnecting 200 chips realizes a neural network system with almost 3300 neurons and 5.6M synapses (11.2M symmetrical connections). The BNU architecture thus permits network expansion without performance degradation or added complexity in the chip design.
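The capacity figures quoted above can be reproduced with simple arithmetic. The sketch below is a back-of-the-envelope check, not part of the paper: it assumes "K" denotes 1000 (which is what makes 200 × 28K come out to the stated 5.6M) and that synapse count scales linearly with chip count, as the BNU expansion scheme implies. The 3300-neuron system figure is taken directly from the abstract rather than derived.

```python
# Capacity arithmetic for the BNU multi-chip system (illustrative only).
# Assumption: "28 K synapses" is read as 28,000, consistent with the
# abstract's 5.6 M figure for 200 chips.

SYNAPSES_PER_CHIP = 28_000   # per-chip synapses ("28K")
NEURONS_PER_CHIP = 336       # per-chip neurons

def system_capacity(n_chips: int) -> tuple[int, int]:
    """Return (synapses, symmetrical connections) for n_chips chips.

    Each synapse counts as two symmetrical connections, matching the
    abstract's 28K synapses = 56K symmetrical connections per chip.
    """
    synapses = SYNAPSES_PER_CHIP * n_chips
    return synapses, 2 * synapses

synapses, symmetric = system_capacity(200)
print(synapses, symmetric)  # 5600000 11200000 -> "5.6 M synapses (11.2 M symmetrical connections)"
```

Note that the system neuron count (almost 3300 for 200 chips) is far below 200 × 336, reflecting that under the BNU scheme the chips pool their branch-neuron units rather than each contributing independent neurons.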