Networks of Neural Cliques
We propose and develop an original model of associative memories relying on coded neural networks. Associative memories are devices able to learn messages and then retrieve them from part of their contents. The state-of-the-art model in terms of efficiency (the ratio of the number of bits learned to the number of bits used) is the Hopfield neural network, whose learning diversity, that is, the number of messages it can store, is lower than n / (2 log(n)), where n is the number of neurons in the network. Our work uses error-correcting coding and decoding techniques, more precisely distributed codes, to considerably increase the performance of associative memories. To achieve this, we introduce original codes whose codewords rely on neural cliques. We show that, combined with sparse local codes, these neural cliques offer a learning diversity that grows quadratically with the number of neurons. The observed gains come from the use of sparsity at several levels: learned messages are much shorter than n, and they use only part of the available material, both in terms of neurons and of connections. The learning process is therefore local, contrary to the Hopfield model. Moreover, these memories offer nearly optimal efficiency. They therefore appear to be a very interesting alternative to classical indexed memories.

Beyond the performance aspects, the proposed model offers much greater biological plausibility than the Hopfield one. Indeed, the concepts of neural cliques, winner-take-all, and temporal synchronization that we introduce into our networks match recent observations in the neurobiological literature. Moreover, since neural cliques are intertwined by their vertices and/or their connections, the proposed model offers new perspectives for the design of cognitive machines able to cross pieces of information in order to produce new ones.
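To make the mechanism described above concrete, here is a minimal Python sketch of a clique-based associative memory: the network is split into clusters, each symbol of a message activates one neuron in its cluster, learning adds the binary connections of the corresponding clique (a local operation touching only that clique), and retrieval iterates a per-cluster winner-take-all. The class name CliqueMemory, the parameters n_clusters and cluster_size, and the fixed iteration count are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

class CliqueMemory:
    """Sketch of a clique-based associative memory: c clusters of l neurons,
    binary connections, messages stored as cliques, winner-take-all retrieval."""

    def __init__(self, n_clusters, cluster_size):
        self.c = n_clusters
        self.l = cluster_size
        n = n_clusters * cluster_size
        self.W = np.zeros((n, n), dtype=bool)  # binary (present/absent) connections

    def _neuron(self, cluster, symbol):
        # One neuron per (cluster, symbol) pair; a message picks one neuron per cluster.
        return cluster * self.l + symbol

    def learn(self, message):
        # Store the message as a clique: connect every pair of its neurons.
        # Learning is local: only the connections of this clique are modified.
        idx = [self._neuron(i, s) for i, s in enumerate(message)]
        for a in idx:
            for b in idx:
                if a != b:
                    self.W[a, b] = True

    def retrieve(self, partial, n_iter=4):
        # `partial` lists the known symbols, with None marking erased positions.
        active = np.zeros(self.c * self.l, dtype=bool)
        known = [(i, s) for i, s in enumerate(partial) if s is not None]
        for i, s in known:
            active[self._neuron(i, s)] = True
        for _ in range(n_iter):
            # Each neuron counts how many active neurons it is connected to.
            scores = self.W[:, active].sum(axis=1)
            active = np.zeros_like(active)
            for i in range(self.c):
                blk = scores[i * self.l:(i + 1) * self.l]
                if blk.max() > 0:
                    # Winner-take-all inside the cluster (ties keep all winners).
                    active[i * self.l:(i + 1) * self.l] = blk == blk.max()
            for i, s in known:  # keep the provided symbols clamped on
                active[self._neuron(i, s)] = True
        # Read out a symbol per cluster when the winner is unique.
        out = []
        for i in range(self.c):
            winners = np.flatnonzero(active[i * self.l:(i + 1) * self.l])
            out.append(int(winners[0]) if winners.size == 1 else None)
        return out

# Usage: 4 clusters of 16 neurons (64 neurons total), messages of 4 symbols.
mem = CliqueMemory(n_clusters=4, cluster_size=16)
mem.learn([3, 7, 1, 12])
print(mem.retrieve([3, None, 1, None]))  # -> [3, 7, 1, 12] when unambiguous
```

Note how the quadratic learning diversity shows up in this layout: storage is an n-by-n binary adjacency matrix, but each learned message consumes only the handful of connections of its clique, so many cliques can coexist before they interfere.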