A GPU-Based Associative Memory Using Sparse Neural Networks

Associative memories, which serve as building blocks for a variety of algorithms, store content in such a way that it can later be retrieved by probing the memory with a small portion of that content, rather than with an address as in more traditional memories. Recently, Gripon and Berrou introduced a novel construction that builds on ideas from the theory of error-correcting codes and greatly outperforms the celebrated Hopfield neural networks in terms of both the number of stored messages per neuron and the number of stored bits per synapse. Their construction admits two retrieval rules, SUM-OF-SUM and SUM-OF-MAX. In this paper, we implement both rules on a general-purpose graphics processing unit (GPU). SUM-OF-SUM relies only on matrix-vector multiplication and is easily implemented on the GPU, whereas SUM-OF-MAX, which involves nonlinear operations, is much less straightforward to implement efficiently. However, SUM-OF-MAX yields significantly lower retrieval error rates. We propose a hybrid scheme tailored for GPU implementation that achieves an 880-fold speedup without sacrificing any accuracy.
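As a rough illustration of the two retrieval rules (not the paper's GPU implementation), the sketch below builds a toy Gripon-Berrou network on a CPU with NumPy. The parameters are hypothetical: 4 clusters of 3 neurons each, a single stored message, and a probe with one erased cluster. SUM-OF-SUM is a plain matrix-vector product over the binary adjacency matrix, while SUM-OF-MAX takes a per-cluster maximum before summing, which is the nonlinear step that complicates a GPU mapping.

```python
import numpy as np

# Hypothetical toy network: c = 4 clusters of l = 3 neurons (n = 12).
c, l = 4, 3
n = c * l

# Store one message as a clique over neurons (0, 4, 8, 10),
# one neuron per cluster, in a binary adjacency matrix W.
W = np.zeros((n, n), dtype=np.uint8)
clique = [0, 4, 8, 10]
for u in clique:
    for v in clique:
        if u != v:
            W[u, v] = 1

# Probe: the first three sub-messages are known, the last is erased.
x = np.zeros(n, dtype=np.uint8)
x[[0, 4, 8]] = 1

# SUM-OF-SUM: a matrix-vector product; each neuron sums the
# activity of every active neuron it is connected to.
scores_sos = W @ x

# SUM-OF-MAX: each neuron counts the *clusters* containing at least
# one active connected neuron (max within each cluster, then sum).
contrib = (W * x).reshape(n, c, l)       # per-neuron, per-cluster signals
scores_som = contrib.max(axis=2).sum(axis=1)

# Winner-take-all inside the erased cluster recovers the missing neuron.
erased = slice(3 * l, 4 * l)             # indices 9..11 (cluster 3)
print(int(np.argmax(scores_sos[erased])) + 3 * l)  # -> 10
print(int(np.argmax(scores_som[erased])) + 3 * l)  # -> 10
```

With a single stored clique both rules agree; the error-rate gap reported in the paper appears when many overlapping cliques are stored and spurious SUM-OF-SUM scores accumulate across clusters.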

[1] Alexander Moopenn, et al. Electronic Implementation of Associative Memory Based on Neural Network Models, 1987, IEEE Transactions on Systems, Man, and Cybernetics.

[2] Vincent Gripon, et al. Learning sparse messages in networks of neural cliques, 2012, arXiv.

[3] David Willshaw, et al. Models of distributed associative memory, 1971.

[4] Vincent Gripon, et al. A Massively Parallel Associative Memory Based on Sparse Neural Networks, 2013, arXiv.

[5] J. J. Hopfield, et al. Neurons with graded response have collective computational properties like those of two-state neurons, 1984, Proceedings of the National Academy of Sciences of the United States of America.

[6] Vincent Gripon, et al. A simple and efficient way to store many messages using neural cliques, 2011, 2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB).

[7] Conrad Sanderson, et al. Armadillo: An Open Source C++ Linear Algebra Library for Fast Prototyping and Computationally Intensive Experiments, 2010.

[8] Vincent Gripon, et al. Sparse Neural Networks With Large Learning Diversity, 2011, IEEE Transactions on Neural Networks.

[9] Claude Berrou, et al. Coded Hopfield networks, 2010, 2010 6th International Symposium on Turbo Codes & Iterative Information Processing.

[10] J. J. Hopfield, et al. Neural networks and physical systems with emergent collective computational abilities, 1982, Proceedings of the National Academy of Sciences of the United States of America.

[11] Claude Berrou, et al. Storing Sparse Messages in Networks of Neural Cliques, 2014, IEEE Transactions on Neural Networks and Learning Systems.

[12] Claude Berrou, et al. Learning long sequences in binary neural networks, 2012.

[13] Fabrice Seguin, et al. Analog implementation of encoded neural networks, 2013, 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013).

[14] H. C. Longuet-Higgins, et al. Non-Holographic Associative Memory, 1969, Nature.

[15] Vincent Gripon, et al. Nearly-optimal associative memories based on distributed constant weight codes, 2012, 2012 Information Theory and Applications Workshop.

[16] Vincent Gripon, et al. Sparse structured associative memories as efficient set-membership data structures, 2013, 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton).

[17] Vincent Gripon, et al. Architecture and implementation of an associative memory using sparse clustered networks, 2012, 2012 IEEE International Symposium on Circuits and Systems.