A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing

[1] Lidong Yang, et al. Autonomous environment-adaptive microrobot swarm navigation enabled by deep learning-based real-time distribution planning, 2022, Nature Machine Intelligence.

[2] S. Masoudi, et al. Combining CNN and Q-learning for increasing the accuracy of lost gamma source finding, 2022, Scientific Reports.

[3] D. Ielmini, et al. In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks, 2021, Nature Materials.

[4] S. Bianchi, et al. A Bio-Inspired Recurrent Neural Network with Self-Adaptive Neurons and PCM Synapses for Solving Reinforcement Learning Tasks, 2020, 2020 IEEE International Symposium on Circuits and Systems (ISCAS).

[5] S. Bianchi, et al. Hardware Implementation of PCM-Based Neurons with Self-Regulating Threshold for Homeostatic Scaling in Unsupervised Learning, 2020, 2020 IEEE International Symposium on Circuits and Systems (ISCAS).

[6] G. Molas, et al. A SiOx RRAM-based hardware with spike frequency adaptation for power-saving continual learning in convolutional neural networks, 2020, 2020 IEEE Symposium on VLSI Technology.

[7] Stefano Ambrogio, et al. A Compact Model for Stochastic Spike-Timing-Dependent Plasticity (STDP) Based on Resistive Switching Memory (RRAM) Synapses, 2020, IEEE Transactions on Electron Devices.

[8] Bin Gao, et al. Fully hardware-implemented memristor convolutional neural network, 2020, Nature.

[9] M. R. Mahmoodi, et al. Versatile stochastic dot product circuits based on nonvolatile memories for high performance neurocomputing and neurooptimization, 2019, Nature Communications.

[10] Yao-Lin Huang, et al. Solving Maze Problem with Reinforcement Learning by a Mobile Robot, 2019, 2019 IEEE International Conference on Computation, Communication and Engineering (ICCCE).

[11] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.

[12] Daniele Ielmini, et al. Unsupervised Learning to Overcome Catastrophic Forgetting in Neural Networks, 2019, IEEE Journal on Exploratory Solid-State Computational Devices and Circuits.

[13] Steve B. Furber, et al. Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype, 2019, IEEE Transactions on Biomedical Circuits and Systems.

[14] J. Yang, et al. Memristive crossbar arrays for brain-inspired computing, 2019, Nature Materials.

[15] Peng Lin, et al. Reinforcement learning with analogue memristor arrays, 2019, Nature Electronics.

[16] F. Merrikh Bayat, et al. Spike-timing-dependent plasticity learning of coincidence detection with passively integrated memristive circuits, 2018, Nature Communications.

[17] Yuanqing Xia, et al. A Novel Deep Neural Network Architecture for Mars Visual Navigation, 2018, ArXiv.

[18] H.-S. Philip Wong, et al. In-memory computing with resistive switching devices, 2018, Nature Electronics.

[19] Pritish Narayanan, et al. Equivalent-accuracy accelerated neural-network training using analogue memory, 2018, Nature.

[20] Daniele Ielmini, et al. Resistive switching synapses for unsupervised learning in feed-forward and recurrent neural networks, 2018, 2018 IEEE International Symposium on Circuits and Systems (ISCAS).

[21] Catherine E. Graves, et al. Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine, 2018, Advanced Materials.

[22] Shimeng Yu, et al. Neuro-Inspired Computing With Emerging Nonvolatile Memorys, 2018, Proceedings of the IEEE.

[23] Mark Sandler, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[24] H.-S. Philip Wong, et al. Device and circuit optimization of RRAM for neuromorphic computing, 2017, 2017 IEEE International Electron Devices Meeting (IEDM).

[25] Demis Hassabis, et al. Mastering the game of Go without human knowledge, 2017, Nature.

[26] D. Hassabis, et al. Neuroscience-Inspired Artificial Intelligence, 2017, Neuron.

[27] Pieter Abbeel, et al. A Simple Neural Attentive Meta-Learner, 2017, ICLR.

[28] H.-S. Philip Wong, et al. Face classification using electronic synapses, 2017, Nature Communications.

[29] Mykel J. Kochenderfer, et al. Cooperative Multi-agent Control Using Deep Reinforcement Learning, 2017, AAMAS Workshops.

[30] Wei D. Lu, et al. Experimental Demonstration of Feature Extraction and Dimensionality Reduction Using Memristor Networks, 2017, Nano Letters.

[31] Kevin Fox, et al. Integrating Hebbian and homeostatic plasticity: introduction, 2017, Philosophical Transactions of the Royal Society B: Biological Sciences.

[32] Pritish Narayanan, et al. Neuromorphic computing using non-volatile memory, 2017.

[33] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, ArXiv.

[34] C. Vorhees, et al. Cincinnati water maze: A review of the development, methods, and evidence as a test of egocentric learning and memory, 2016, Neurotoxicology and Teratology.

[35] Catherine Graves, et al. Dot-product engine for neuromorphic computing: Programming 1T1M crossbar to accelerate matrix-vector multiplication, 2016, 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC).

[36] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.

[37] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.

[38] Giacomo Indiveri, et al. Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity, 2014, Front. Comput. Neurosci.

[39] Chung Lam, et al. Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array, 2014, Front. Neurosci.

[40] Wolfgang Maass, et al. Noise as a Resource for Computation and Learning in Networks of Spiking Neurons, 2014, Proceedings of the IEEE.

[41] Chiara Bartolozzi, et al. Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems, 2014, Proceedings of the IEEE.

[42] Jan Peters, et al. A Survey on Policy Search for Robotics, 2013, Found. Trends Robotics.

[43] Wulfram Gerstner, et al. Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons, 2013, PLoS Comput. Biol.

[44] Takashi Kubota, et al. An image based path planning scheme for exploration rover, 2011, 2011 IEEE International Conference on Robotics and Biomimetics.

[45] D. Ielmini, et al. Modeling the Universal Set/Reset Characteristics of Bipolar RRAM by Field- and Temperature-Driven Filament Growth, 2011, IEEE Transactions on Electron Devices.

[46] Shimeng Yu, et al. An Electronic Synapse Device Based on Metal Oxide Resistive Switching Memory for Neuromorphic Computation, 2011, IEEE Transactions on Electron Devices.

[47] Gert Cauwenberghs, et al. Neuromorphic Silicon Neuron Circuits, 2011, Front. Neurosci.

[48] Marten Scheffer, et al. Resilience thinking: integrating resilience, adaptability and transformability, 2010.

[49] P. Dayan, et al. States versus Rewards: Dissociable Neural Prediction Error Signals Underlying Model-Based and Model-Free Reinforcement Learning, 2010, Neuron.

[50] G. Turrigiano. The Self-Tuning Neuron: Synaptic Scaling of Excitatory Synapses, 2008, Cell.

[51] A. McEwen, et al. Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE), 2007.

[52] M. Miguéns, et al. Hippocampal Synaptic Plasticity and Water Maze Learning in Cocaine Self-Administered Rats, 2006, Annals of the New York Academy of Sciences.

[53] G. Turrigiano. Homeostatic plasticity in neuronal networks: the more things change, the more they stay the same, 1999, Trends in Neurosciences.

[54] Peter Dayan, et al. A Neural Substrate of Prediction and Reward, 1997, Science.

[55] Andrew W. Moore, et al. Reinforcement Learning: A Survey, 1996, J. Artif. Intell. Res.

[56] J. Peng, et al. Efficient Learning and Planning Within the Dyna Framework, 1993, IEEE International Conference on Neural Networks.

[57] P. Dayan, et al. Q-learning, 1992, Machine Learning.

[58] Richard S. Sutton, et al. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming, 1990, ML.

[59] D. Amit. Modelling Brain Function: The World of Attractor Neural Networks, 1989.

[60] Richard S. Sutton, et al. Learning to predict by the methods of temporal differences, 1988, Machine Learning.

[61] Stephen Grossberg, et al. Competitive Learning: From Interactive Activation to Adaptive Resonance, 1987, Cogn. Sci.

[62] Lambert Schomaker, et al. An Investigation Into the Effect of the Learning Rate on Overestimation Bias of Connectionist Q-learning, 2021, ICAART.

[63] Peng Lin, et al. Fully memristive neural networks for pattern classification with unsupervised learning, 2018.

[64] Neuro-Inspired Computing With Emerging Nonvolatile Memory, 2018.

[65] Jonathan D. Power, et al. Neural plasticity across the lifespan, 2017, Wiley Interdisciplinary Reviews: Developmental Biology.

[66] Peter Vrancx, et al. Reinforcement Learning: State-of-the-Art, 2012.

[67] Marco Wiering, et al. Reinforcement Learning and Markov Decision Processes, 2012, Reinforcement Learning.

[68] C. Atkeson, et al. Prioritized sweeping: Reinforcement learning with less data and less time, 2004, Machine Learning.

[69] K. Doya. Reinforcement Learning in Continuous Time and Space, 2000, Neural Computation.

[70] Donald E. Kirk, et al. Optimal control theory: an introduction, 1970.