Synaptic Weight States in a Locally Competitive Algorithm for Neuromorphic Memristive Hardware

Memristors promise high-density nanoscale neuromorphic architectures that support in situ learning. While traditional learning algorithms commonly assume analog-valued synaptic weights, physical memristors may offer only a finite set of achievable states during online learning. In this paper, we simulate a learning algorithm with limitations on both the resolution of its weights and the means of switching between them, to explore how these properties affect classification performance. For our experiments, we use the locally competitive algorithm (LCA) of Rozell et al. in conjunction with the MNIST dataset and a set of natural images. We investigate the effects of both linear and non-linear distributions of weight states. Our results show that as long as the weight states are distributed close to linearly, the algorithm remains effective for classifying digits, while reconstructing images benefits from non-linearity. Further, the resolution required of a device depends on its transition function between states: for transitions akin to round-to-nearest, synaptic weights should have around 16 possible states (4-bit resolution) to obtain optimal results. We find that lowering the threshold required to change states, or adding stochasticity to the system, can reduce that requirement to four states (2-bit resolution). These outcomes are relevant for building effective neuromorphic hardware with state-of-the-art memristive devices.
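The two transition behaviors contrasted in the abstract can be illustrated with a minimal sketch. This is not the paper's simulation code; the state distribution, update functions, and variable names below are illustrative assumptions, showing only the generic idea of mapping analog weight updates onto a finite, linearly spaced set of device states via round-to-nearest versus a stochastic neighbor-selection rule.

```python
import numpy as np

rng = np.random.default_rng(0)


def quantize_round(w, states):
    """Round-to-nearest transition: each analog weight snaps to the
    closest member of the finite set of device states."""
    idx = np.abs(states[None, :] - w[:, None]).argmin(axis=1)
    return states[idx]


def quantize_stochastic(w, states, rng):
    """Stochastic transition (illustrative): jump to one of the two
    neighboring states with probability proportional to proximity,
    so small updates can still change state occasionally."""
    states = np.sort(states)
    hi = np.clip(np.searchsorted(states, w), 1, len(states) - 1)
    lo = hi - 1
    p_hi = (w - states[lo]) / (states[hi] - states[lo])
    pick = rng.random(w.shape) < np.clip(p_hi, 0.0, 1.0)
    return np.where(pick, states[hi], states[lo])


# A 4-bit device: 16 linearly distributed states on [0, 1].
states = np.linspace(0.0, 1.0, 16)
w = rng.random(5)  # analog target weights from a learning rule
print(quantize_round(w, states))
print(quantize_stochastic(w, states, rng))
```

With round-to-nearest, an update smaller than half the inter-state gap is always lost, which is one intuition for why that transition demands more states; the stochastic rule preserves such updates in expectation, consistent with the abstract's finding that stochasticity lowers the required resolution.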

[1] Michael Robert DeWeese, et al. A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields, 2011, PLoS Comput. Biol.

[2] Zhengya Zhang, et al. Efficient Hardware Architecture for Sparse Coding, 2014, IEEE Transactions on Signal Processing.

[3] C. Toumazou, et al. Memristive devices as parameter setting elements in programmable gain amplifiers, 2012.

[4] Stefano Fusi, et al. Long Memory Lifetimes Require Complex Synapses and Limited Sparseness, 2007, Frontiers Comput. Neurosci.

[5] Timothée Masquelier, et al. Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity, 2007, PLoS Comput. Biol.

[6] Gert Cauwenberghs, et al. Neuromorphic Silicon Neuron Circuits, 2011, Front. Neurosci.

[7] Jens Bürger, et al. On the influence of synaptic weight states in a locally competitive algorithm for memristive hardware, 2014, 2014 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH).

[8] Ligang Gao, et al. High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm, 2011, Nanotechnology.

[9] Wei Lu, et al. Replicating Kernels with a Short Stride Allows Sparse Reconstructions with Fewer Independent Kernels, 2014, ArXiv.

[10] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.

[11] Wei Yang Lu, et al. Nanoscale memristor device as synapse in neuromorphic systems, 2010, Nano Letters.

[12] Jiantao Zhou, et al. Stochastic Memristive Devices for Computing and Neuromorphic Applications, 2013, Nanoscale.

[13] Richard G. Baraniuk, et al. Sparse Coding via Thresholding and Local Competition in Neural Circuits, 2008, Neural Computation.

[14] Bernabé Linares-Barranco, et al. On Spike-Timing-Dependent-Plasticity, Memristive Devices, and Building a Self-Learning Visual Cortex, 2011, Front. Neurosci.

[15] Morris R. Driels, et al. Determining the Number of Iterations for Monte Carlo Simulations of Weapon Effectiveness, 2004.

[16] Jacques-Olivier Klein, et al. Bioinspired networks with nanoscale memristive devices that combine the unsupervised and supervised learning approaches, 2012, 2012 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH).