[1] Wulfram Gerstner, et al. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network, 2017, eLife.
[2] Melika Payvand, et al. Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays, 2019, 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS).
[3] Friedemann Zenke, et al. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[4] S. Grossberg, et al. The Adaptive Brain, 1990.
[5] Michael J. Frank, et al. Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia, 2006, Neural Computation.
[6] Erich Elsen, et al. A Practical Sparse Approximation for Real Time Recurrent Learning, 2020, arXiv.
[7] Jean-Jacques E. Slotine, et al. Collective Stability of Networks of Winner-Take-All Circuits, 2011, Neural Computation.
[8] Uwe Naumann, et al. The Art of Differentiating Computer Programs: An Introduction to Algorithmic Differentiation, 2012, Software, Environments, and Tools.
[9] Rodolphe Sepulchre, et al. Neuronal behaviors: A control perspective, 2015, 54th IEEE Conference on Decision and Control (CDC).
[10] Kaushik Roy, et al. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures, 2019, Frontiers in Neuroscience.
[11] Jean-Pascal Pfister, et al. Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning, 2005, Neural Computation.
[12] H. Sompolinsky, et al. The tempotron: a neuron that learns spike timing–based decisions, 2006, Nature Neuroscience.
[13] F. Rosenblatt, et al. The perceptron: a probabilistic model for information storage and organization in the brain, 1958, Psychological Review.
[14] Mark C. W. van Rossum, et al. A Novel Spike Distance, 2001, Neural Computation.
[15] Andrew S. Cassidy, et al. Convolutional networks for fast, energy-efficient neuromorphic computing, 2016, Proceedings of the National Academy of Sciences.
[16] Uwe Naumann, et al. Optimal Jacobian accumulation is NP-complete, 2007, Mathematical Programming.
[17] L. F. Abbott, et al. Building functional networks of spiking model neurons, 2016, Nature Neuroscience.
[18] W. Senn, et al. Matching Recall and Storage in Sequence Learning with Spiking Neural Networks, 2013, The Journal of Neuroscience.
[19] Andreas Griewank, et al. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Second Edition, 2000, Frontiers in Applied Mathematics.
[20] Yoshua Bengio, et al. Dendritic cortical microcircuits approximate the backpropagation algorithm, 2018, NeurIPS.
[21] Wolfgang Maass, et al. A solution to the learning dilemma for recurrent networks of spiking neurons, 2019, Nature Communications.
[22] Tobi Delbrück, et al. Training Deep Spiking Neural Networks Using Backpropagation, 2016, Frontiers in Neuroscience.
[23] Yoshua Bengio, et al. Learning long-term dependencies with gradient descent is difficult, 1994, IEEE Transactions on Neural Networks.
[24] Friedemann Zenke, et al. The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks, 2020, bioRxiv.
[25] Federico Corradi, et al. Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks, 2020, ICONS.
[26] Timothy P. Lillicrap, et al. Towards deep learning with segregated dendrites, 2016, eLife.
[27] Everton J. Agnes, et al. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks, 2015, Nature Communications.
[28] Siddharth Joshi, et al. Memory-Efficient Synaptic Connectivity for Spike-Timing-Dependent Plasticity, 2019, Frontiers in Neuroscience.
[29] Rodney J. Douglas, et al. A pulse-coded communications infrastructure for neuromorphic systems, 1999.
[30] Barak A. Pearlmutter, et al. Automatic differentiation in machine learning: a survey, 2015, Journal of Machine Learning Research.
[31] Gert Cauwenberghs, et al. Deep Supervised Learning Using Local Errors, 2017, Frontiers in Neuroscience.
[32] E. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling, 2007, BMC Neuroscience.
[33] Giacomo Indiveri, et al. A Systematic Method for Configuring VLSI Networks of Spiking Neurons, 2011, Neural Computation.
[34] Johannes Schemmel, et al. An accelerated analog neuromorphic hardware system emulating NMDA- and calcium-based non-linear dendrites, 2017, International Joint Conference on Neural Networks (IJCNN).
[35] Evangelos Eleftheriou, et al. Online spatio-temporal learning in deep neural networks, 2020, arXiv.
[36] Shih-Chii Liu, et al. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification, 2017, Frontiers in Neuroscience.
[37] Wulfram Gerstner, et al. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition, 2014.
[38] John Salvatier, et al. Theano: A Python framework for fast computation of mathematical expressions, 2016, arXiv.
[39] Hesham Mostafa, et al. Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks, 2019, IEEE Signal Processing Magazine.
[40] Michael Carbin, et al. The Lottery Ticket Hypothesis: Training Pruned Neural Networks, 2018, arXiv.
[41] Philip H. S. Torr, et al. SNIP: Single-shot Network Pruning based on Connection Sensitivity, 2018, ICLR.
[42] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[43] Wilten Nicola, et al. Supervised learning in spiking neural networks with FORCE training, 2016, Nature Communications.
[44] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[45] Carver A. Mead, et al. Neuromorphic electronic systems, 1990, Proceedings of the IEEE.
[46] Surya Ganguli, et al. A deep learning framework for neuroscience, 2019, Nature Neuroscience.
[47] David A. Patterson, et al. In-datacenter performance analysis of a tensor processing unit, 2017, ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA).
[48] Osvaldo Simeone, et al. An Introduction to Probabilistic Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications, 2019, IEEE Signal Processing Magazine.
[49] Robert Gütig, et al. Spiking neurons can discover predictive features by aggregate-label learning, 2016, Science.
[50] Osvaldo Simeone, et al. VOWEL: A Local Online Learning Rule for Recurrent Networks of Probabilistic Spiking Winner-Take-All Circuits, 2020, 25th International Conference on Pattern Recognition (ICPR).
[51] Diederik P. Kingma, et al. GPU Kernels for Block-Sparse Weights, 2017.
[52] Peter Sterling, et al. Principles of Neural Design, 2015.
[53] Geoffrey E. Hinton, et al. A Learning Algorithm for Boltzmann Machines, 1985, Cognitive Science.
[54] John Wawrzynek, et al. Silicon Auditory Processors as Computer Peripherals, 1992, NIPS.
[55] Colin J. Akerman, et al. Random synaptic feedback weights support error backpropagation for deep learning, 2016, Nature Communications.
[56] Surya Ganguli, et al. Improved multitask learning through synaptic intelligence, 2017, arXiv.
[57] Garrick Orchard, et al. SLAYER: Spike Layer Error Reassignment in Time, 2018, NeurIPS.
[58] Yali Amit, et al. Deep Learning With Asymmetric Connections and Hebbian Updates, 2018, Frontiers in Computational Neuroscience.
[59] Pierre Yger, et al. Slow feature analysis with spiking neurons and its application to audio stimuli, 2016, Journal of Computational Neuroscience.
[61] Giacomo Indiveri, et al. A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs), 2017, IEEE Transactions on Biomedical Circuits and Systems.
[62] Emre O. Neftci, et al. Data and Power Efficient Intelligence with Neuromorphic Learning Machines, 2018, iScience.
[63] Johannes Schemmel, et al. A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems, 2010, Biological Cybernetics.
[64] Léon Bottou, et al. Sn: A simulator for connectionist models, 1988.
[65] Sander M. Bohte, et al. SpikeProp: backpropagation for networks of spiking neurons, 2000, ESANN.
[66] Erich Elsen, et al. Rigging the Lottery: Making All Tickets Winners, 2020, ICML.
[67] Pete Warden, et al. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition, 2018, arXiv.
[68] Terrence J. Sejnowski, et al. Simple framework for constructing functional spiking recurrent neural networks, 2019, Proceedings of the National Academy of Sciences.
[69] W. Gerstner, et al. Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules, 2016, Frontiers in Neural Circuits.
[70] T. Toyoizumi, et al. Learning with three factors: modulating Hebbian plasticity with errors, 2017, Current Opinion in Neurobiology.
[71] Kyunghyun Cho, et al. A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks, 2019, Journal of Machine Learning Research.
[72] Andrew S. Cassidy, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface, 2014, Science.
[73] Michael Pfeiffer, et al. Deep Learning With Spiking Neurons: Opportunities and Challenges, 2018, Frontiers in Neuroscience.
[74] Gert Cauwenberghs, et al. Neuromorphic Silicon Neuron Circuits, 2011, Frontiers in Neuroscience.
[75] Giacomo Indiveri, et al. A current-mode conductance-based silicon neuron for address-event neuromorphic systems, 2009, IEEE International Symposium on Circuits and Systems.
[76] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[77] Surya Ganguli, et al. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks, 2017, Neural Computation.
[78] Pierre Baldi, et al. Learning in the machine: Random backpropagation and the deep learning channel, 2016, Artificial Intelligence.
[79] Gert Cauwenberghs, et al. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[80] Romain Brette, et al. Brian: a simulator for spiking neural networks in Python, 2008, Frontiers in Neuroinformatics.
[81] Classifying Images with Few Spikes per Neuron, 2020, arXiv.
[82] Adam Santoro, et al. Backpropagation and the brain, 2020, Nature Reviews Neuroscience.
[83] Ronald J. Williams, et al. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, 1989, Neural Computation.
[84] Wulfram Gerstner, et al. Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules, 2018, Frontiers in Neural Circuits.
[85] Daniel J. Amit. Modeling Brain Function: The World of Attractor Neural Networks, 1st Edition, 1989.
[86] André Grüning, et al. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks, 2015, Neural Computation.
[87] Kilian Q. Weinberger, et al. CondenseNet: An Efficient DenseNet Using Learned Group Convolutions, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[88] Mike E. Davies, et al. Benchmarks for progress in neuromorphic computing, 2019, Nature Machine Intelligence.
[89] Arild Nøkland, et al. Training Neural Networks with Local Error Signals, 2019, ICML.
[90] Erich Elsen, et al. Efficient Neural Audio Synthesis, 2018, ICML.
[91] J. F. Kolen, et al. Backpropagation without weight transport, 1994, Proceedings of the 1994 IEEE International Conference on Neural Networks (ICNN'94).
[92] Wolfgang Maass, et al. Optimized spiking neurons can classify images with high accuracy through temporal coding with two spikes, 2020, Nature Machine Intelligence.
[93] Xiaohui Xie, et al. Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks, 2003, Neural Computation.
[94] Malu Zhang, et al. An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks, 2019, Frontiers in Neuroscience.
[95] Timothée Masquelier, et al. Deep Learning in Spiking Neural Networks, 2018, Neural Networks.
[96] Nicholas T. Carnevale, et al. Simulation of networks of spiking neurons: A review of tools and strategies, 2006, Journal of Computational Neuroscience.
[97] Gert Cauwenberghs, et al. Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain, 2018, Frontiers in Neuroscience.
[98] Bjorn De Sutter, et al. Dynamic Automatic Differentiation of GPU Broadcast Kernels, 2018, NIPS.
[99] Sander M. Bohte, et al. Efficient Computation in Adaptive Artificial Spiking Neural Networks, 2017, arXiv.
[100] Friedemann Zenke, et al. Finding trainable sparse networks through Neural Tangent Transfer, 2020, ICML.
[101] Stefan Wermter, et al. Continual Lifelong Learning with Neural Networks: A Review, 2019, Neural Networks.
[102] André Grüning, et al. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding, 2016, PLoS ONE.
[103] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[104] Moritz Hardt, et al. Stable Recurrent Models, 2018, ICLR.
[105] U. Bhalla. Molecular computation in neurons: a modeling perspective, 2014, Current Opinion in Neurobiology.
[106] Robert A. Legenstein, et al. Long short-term memory and Learning-to-learn in networks of spiking neurons, 2018, NeurIPS.
[107] Carver Mead. Analog VLSI and Neural Systems, 1989.
[108] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[109] Wulfram Gerstner, et al. Limits to high-speed simulations of spiking neural networks using general-purpose computers, 2014, Frontiers in Neuroinformatics.
[110] Daniel L. K. Yamins, et al. Pruning neural networks without any data by iteratively conserving synaptic flow, 2020, NeurIPS.
[111] W. Gerstner, et al. Temporal whitening by power-law adaptation in neocortical neurons, 2013, Nature Neuroscience.
[112] Jacques Kaiser, et al. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE), 2018, Frontiers in Neuroscience.
[113] James M. Murray, et al. Local online learning in recurrent networks with random feedback, 2018, bioRxiv.
[114] Alex Graves, et al. Supervised Sequence Labelling, 2012.
[115] Henry Kennedy, et al. A Predictive Network Model of Cerebral Cortical Connectivity Based on a Distance Rule, 2013, Neuron.
[116] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[117] Terrence J. Sejnowski, et al. Gradient Descent for Spiking Neural Networks, 2017, NeurIPS.
[118] A. Litwin-Kumar, et al. Formation and maintenance of neuronal assemblies through synaptic plasticity, 2014, Nature Communications.
[119] Hong Wang, et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning, 2018, IEEE Micro.
[120] David Kappel, et al. Deep Rewiring: Training very sparse deep networks, 2017, ICLR.
[121] Wulfram Gerstner, et al. Stochastic variational learning in recurrent spiking networks, 2014, Frontiers in Computational Neuroscience.
[122] Angelika Steger, et al. Approximating Real-Time Recurrent Learning with Random Kronecker Factors, 2018, NeurIPS.
[123] Alex Graves, et al. Supervised Sequence Labelling with Recurrent Neural Networks, 2012, Studies in Computational Intelligence.
[124] Yannik Stradmann, et al. Training spiking multi-layer networks with surrogate gradients on an analog neuromorphic substrate, 2020, arXiv.
[125] Eugene M. Izhikevich. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, 2006.
[126] Antoine Dupret, et al. SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes, 2019, ICLR.
[127] L. F. Abbott, et al. Generating Coherent Patterns of Activity from Chaotic Neural Networks, 2009, Neuron.
[128] James Martens, et al. On the Variance of Unbiased Online Recurrent Optimization, 2019, arXiv.
[129] Evangelos Eleftheriou, et al. Deep learning incorporating biologically inspired neural dynamics and in-memory computing, 2020, Nature Machine Intelligence.
[130] Yann Ollivier, et al. Unbiased Online Recurrent Optimization, 2017, ICLR.
[131] Peter C. Humphreys, et al. Deep Learning without Weight Transport, 2019, NeurIPS.