Silicon implementation of self-learning neural networks
The chip developed uses 1.0-µm CMOS technology and integrates 336 neurons and 28K synapses, equivalent to 56K symmetrical connections. The branch-neuron-unit (BNU) architecture employed in this chip enables interconnection of up to 200 chips, based on the assumption of a 30% firing rate and a 1% fluctuation of each neuron unit. With this architecture, processing speed is independent of the number of interconnected chips. Interconnecting 200 chips realizes a neural network system with almost 3300 neurons and 5.6M synapses (11.2M symmetrical connections). The BNU architecture thus permits network expansion without performance degradation or increased complexity in the chip design.
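As a rough illustration of how the quoted figures relate, the following is a minimal back-of-envelope sketch (in Python, not from the paper). It assumes that each interconnected chip contributes its full 28K synapses to one fully connected network, so the supported neuron count is bounded by the square root of the total number of symmetrical connections; the constant names and the helper function are illustrative only.

```python
import math

# Per-chip figures quoted in the abstract (names are illustrative assumptions).
NEURONS_PER_CHIP = 336
SYNAPSES_PER_CHIP = 28_000
SYMMETRIC_PER_CHIP = 2 * SYNAPSES_PER_CHIP  # 56K symmetrical connections


def fully_connected_capacity(num_chips: int) -> tuple[int, int, int]:
    """Estimate network size when `num_chips` BNU chips are interconnected.

    Assumes a fully interconnected network, where n neurons require roughly
    n^2 symmetrical connections, so n scales as sqrt(total connections).
    """
    synapses = SYNAPSES_PER_CHIP * num_chips        # 5.6M at 200 chips
    symmetric = SYMMETRIC_PER_CHIP * num_chips      # 11.2M at 200 chips
    neurons = math.isqrt(symmetric)                 # ~3300 at 200 chips
    return neurons, synapses, symmetric


if __name__ == "__main__":
    n, s, sym = fully_connected_capacity(200)
    print(f"200 chips -> ~{n} neurons, {s / 1e6:.1f}M synapses, "
          f"{sym / 1e6:.1f}M symmetrical connections")
```

Under these assumptions, 200 chips yield about 3,346 fully interconnected neurons, consistent with the "almost 3300 neurons" figure in the abstract, since the BNU architecture distributes each neuron's synapses across chips rather than simply multiplying the per-chip neuron count.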