Neural Random Access Machines Optimized by Differential Evolution

Recently, a research trend has emerged in which algorithms are learned by means of deep learning techniques. Most of these approaches are different implementations of the controller-interface abstraction: they use a neural controller as a “processor” and provide different interfaces for input, output, and memory management. Within this trend, we consider the Neural Random-Access Machine (NRAM) model to be of particular interest, because it is also able to solve problems that require indirect memory references. In this paper we propose a version of the Neural Random-Access Machine in which the core neural controller is trained with the Differential Evolution meta-heuristic instead of the usual backpropagation algorithm. We also present experimental results showing that this approach is effective and competitive.
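To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of the classic DE/rand/1/bin scheme applied to a flattened vector of neural-controller parameters; the population size, mutation factor F, crossover rate CR, and the toy fitness function are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: Differential Evolution (DE/rand/1/bin) optimizing a flat
# parameter vector, as a stand-in for training a neural controller without
# backpropagation. All hyperparameters and the objective are assumptions.
import numpy as np

rng = np.random.default_rng(0)

DIM = 20          # number of controller parameters (toy size, assumed)
POP_SIZE = 30     # DE population size (assumed)
F, CR = 0.5, 0.9  # DE mutation factor and crossover rate (assumed)

def fitness(weights):
    # Stand-in for the controller's task loss; in the NRAM setting this
    # would evaluate the controller on memory-access tasks.
    return np.sum(weights ** 2)  # toy objective: minimize ||w||^2

pop = rng.uniform(-1.0, 1.0, size=(POP_SIZE, DIM))
scores = np.array([fitness(ind) for ind in pop])

for generation in range(200):
    for i in range(POP_SIZE):
        # Pick three distinct individuals, all different from the target i.
        a, b, c = pop[rng.choice([j for j in range(POP_SIZE) if j != i],
                                 size=3, replace=False)]
        mutant = a + F * (b - c)              # DE/rand/1 mutation
        cross = rng.random(DIM) < CR          # binomial crossover mask
        cross[rng.integers(DIM)] = True       # keep >= 1 gene from the mutant
        trial = np.where(cross, mutant, pop[i])
        trial_score = fitness(trial)
        if trial_score <= scores[i]:          # greedy one-to-one selection
            pop[i], scores[i] = trial, trial_score

best = pop[np.argmin(scores)]
print("best fitness:", scores.min())
```

Because selection relies only on fitness comparisons, this scheme requires no gradient of the loss, which is what allows it to replace backpropagation when training the controller.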