Mapping high-performance RNNs to in-memory neuromorphic chips
[1] Giacomo Indiveri, et al. An Ultra-Low Power Sigma-Delta Neuron Circuit, 2019, 2019 IEEE International Symposium on Circuits and Systems (ISCAS).
[2] Dit-Yan Yeung, et al. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting, 2015, NIPS.
[3] Shuai Li, et al. Independently Recurrent Neural Network (IndRNN): Building a Longer and Deeper RNN, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[4] Rahul Sarpeshkar, et al. A Low-Power Wide-Dynamic-Range Analog VLSI Cochlea, 1998.
[5] Geoffrey E. Hinton, et al. A Simple Way to Initialize Recurrent Networks of Rectified Linear Units, 2015, ArXiv.
[6] Wulfram Gerstner, et al. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity, 2005, Journal of Neurophysiology.
[7] Hesham Mostafa, et al. Supervised Learning Based on Temporal Coding in Spiking Neural Networks, 2016, IEEE Transactions on Neural Networks and Learning Systems.
[8] Giacomo Indiveri, et al. A neuromorphic systems approach to in-memory computing with non-ideal memristive devices: From mitigation to exploitation, 2018, Faraday Discussions.
[9] Matthew Cook, et al. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing, 2015, 2015 International Joint Conference on Neural Networks (IJCNN).
[10] Gabor C. Temes, et al. Understanding Delta-Sigma Data Converters, 2004.
[11] Giacomo Indiveri, et al. A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs), 2017, IEEE Transactions on Biomedical Circuits and Systems.
[12] Hong Wang, et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning, 2018, IEEE Micro.
[13] John von Neumann, et al. First draft of a report on the EDVAC, 1993, IEEE Annals of the History of Computing.
[14] Wulfram Gerstner, et al. Spiking Neuron Models: Single Neurons, Populations, Plasticity, 2002.
[15] Michael C. Mozer, et al. Induction of Multiscale Temporal Structure, 1991, NIPS.
[16] Tom Shanley. x86 Instruction Set Architecture, 2010.
[17] Shih-Chii Liu, et al. Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences, 2016, NIPS.
[18] Joel Emer, et al. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks, 2017, IEEE Journal of Solid-State Circuits.
[19] Razvan Pascanu, et al. On the difficulty of training recurrent neural networks, 2012, ICML.
[20] Rachata Ausavarungnirun, et al. Enabling the Adoption of Processing-in-Memory: Challenges, Mechanisms, Future Research Directions, 2018, ArXiv.
[21] Rahul Sarpeshkar, et al. Analog Versus Digital: Extrapolating from Electronics to Neurobiology, 1998, Neural Computation.
[22] Alexander M. Rush, et al. Character-Aware Neural Language Models, 2015, AAAI.
[23] Jürgen Schmidhuber, et al. Learning to Forget: Continual Prediction with LSTM, 2000, Neural Computation.
[24] Pete Warden, et al. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition, 2018, ArXiv.
[25] M. Mitchell Waldrop, et al. The chips are down for Moore's law, 2016, Nature.
[26] Osman S. Unsal, et al. System-level power estimation tool for embedded processor based platforms, 2014, RAPIDO '14.
[27] Jason Weston, et al. Curriculum learning, 2009, ICML '09.
[28] Shih-Chii Liu, et al. AER EAR: A Matched Silicon Cochlea Pair With Address Event Representation Interface, 2007, IEEE Transactions on Circuits and Systems I: Regular Papers.
[29] Alex Graves, et al. Neural Turing Machines, 2014, ArXiv.
[30] R. Schaller, et al. Moore's law: past, present and future, 1997.
[31] Shih-Chii Liu, et al. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification, 2017, Frontiers in Neuroscience.
[32] Yoshua Bengio, et al. Unitary Evolution Recurrent Neural Networks, 2015, ICML.
[33] Luca Benini, et al. Origami: A 803-GOp/s/W Convolutional Network Accelerator, 2015, IEEE Transactions on Circuits and Systems for Video Technology.
[34] David Bol, et al. A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS, 2018, IEEE Transactions on Biomedical Circuits and Systems.
[35] Young C. Yoon, et al. LIF and Simplified SRM Neurons Encode Signals Into Spikes via a Form of Asynchronous Pulse Sigma–Delta Modulation, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[36] Giacomo Indiveri, et al. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses, 2015, Frontiers in Neuroscience.
[37] Johannes Schemmel, et al. Live demonstration: A scaled-down version of the BrainScaleS wafer-scale neuromorphic system, 2012, 2012 IEEE International Symposium on Circuits and Systems.
[38] Romain Brette, et al. Brian 2, an intuitive and efficient neural simulator, 2019, eLife.
[39] Marc-Oliver Gewaltig, et al. NEST (NEural Simulation Tool), 2007, Scholarpedia.
[40] Travis E. Oliphant, et al. Guide to NumPy, 2015.
[41] Fei Tian, et al. Recurrent Residual Learning for Sequence Classification, 2016, EMNLP.
[42] Sander M. Bohte, et al. Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model, 2012, NIPS.
[43] M. J. M. Pelgrom, et al. Matching properties of MOS transistors, 1989.
[44] Tara N. Sainath, et al. Convolutional neural networks for small-footprint keyword spotting, 2015, INTERSPEECH.
[45] Randall D. Beer, et al. On the Dynamics of Small Continuous-Time Recurrent Neural Networks, 1995, Adaptive Behavior.
[46] Yu Wang, et al. PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory, 2016, 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[47] Vladlen Koltun, et al. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling, 2018, ArXiv.
[48] Ann Bies, et al. The Penn Treebank: Annotating Predicate Argument Structure, 1994, HLT.
[49] Herbert Jaeger, et al. Optimization and applications of echo state networks with leaky-integrator neurons, 2007, Neural Networks.
[50] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[51] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, Journal of Machine Learning Research.
[52] Bernard Brezzo, et al. TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip, 2015, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.
[53] Kiyoung Choi, et al. PIM-enabled instructions: A low-overhead, locality-aware processing-in-memory architecture, 2015, 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA).
[54] Kwabena Boahen, et al. Braindrop: A Mixed-Signal Neuromorphic Architecture With a Dynamical Systems-Based Programming Model, 2019, Proceedings of the IEEE.
[55] Yann LeCun, et al. Recurrent Orthogonal Networks and Long-Memory Tasks, 2016, ICML.
[56] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[57] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[58] Chiara Bartolozzi, et al. Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems, 2014, Proceedings of the IEEE.
[59] Shih-Chii Liu, et al. Overcoming the vanishing gradient problem in plain recurrent networks, 2018, ArXiv.
[60] Yoshua Bengio, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, 2014, ArXiv.
[61] Craig T. Jin, et al. An Active 2-D Silicon Cochlea, 2008, IEEE Transactions on Biomedical Circuits and Systems.