Myron Flickner | Dharmendra S. Modha | Rathinakumar Appuswamy | John V. Arthur | Steven K. Esser | Timothy Melano | Paul Merolla | Jeffrey L. McKinstry | Tapan K. Nayak
[1] Jongkil Park,et al. A 65k-neuron 73-Mevents/s 22-pJ/event asynchronous micro-pipelined integrate-and-fire array transceiver , 2014, 2014 IEEE Biomedical Circuits and Systems Conference (BioCAS) Proceedings.
[2] Yixin Chen,et al. Compressing Neural Networks with the Hashing Trick , 2015, ICML.
[3] Ali Farhadi,et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks , 2016, ECCV.
[4] Myron Flickner,et al. Compass: A scalable simulator for an architecture for cognitive computing , 2012, 2012 International Conference for High Performance Computing, Networking, Storage and Analysis.
[5] Giacomo Indiveri,et al. An Event-Based Neural Network Architecture With an Asynchronous Programmable Synaptic Memory , 2014, IEEE Transactions on Biomedical Circuits and Systems.
[6] Misha Denil,et al. ACDC: A Structured Efficient Linear Layer , 2015, ICLR.
[7] Ila R Fiete,et al. Gradient learning in spiking neural networks by dynamic perturbation of conductances. , 2006, Physical review letters.
[8] Yoshua Bengio,et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations , 2015, NIPS.
[9] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[10] Song Han,et al. EIE: Efficient Inference Engine on Compressed Deep Neural Network , 2016, 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[11] Andrew S. Cassidy,et al. Cognitive computing programming paradigm: A Corelet Language for composing networks of neurosynaptic cores , 2013, The 2013 International Joint Conference on Neural Networks (IJCNN).
[12] Tara N. Sainath,et al. Auto-encoder bottleneck features using deep belief networks , 2012, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[13] Léon Bottou,et al. Large-Scale Machine Learning with Stochastic Gradient Descent , 2010, COMPSTAT.
[14] Nitish Srivastava,et al. Improving neural networks by preventing co-adaptation of feature detectors , 2012, ArXiv.
[15] Jim D. Garside,et al. SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation , 2013, IEEE Journal of Solid-State Circuits.
[16] Yann LeCun,et al. Optimal Brain Damage , 1989, NIPS.
[17] Eunhyeok Park,et al. Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications , 2015, ICLR.
[18] W. Oettli. Capacity-Achieving Input Distributions for Some Amplitude-Limited Channels with Additive Noise .
[19] Geoffrey E. Hinton,et al. A View of the Em Algorithm that Justifies Incremental, Sparse, and other Variants , 1998, Learning in Graphical Models.
[20] Jason Maassen,et al. Optimizing convolution operations on GPUs using adaptive tiling , 2014, Future Gener. Comput. Syst..
[21] Ran El-Yaniv,et al. Binarized Neural Networks , 2016, ArXiv.
[22] Joel G. Smith,et al. The Information Capacity of Amplitude- and Variance-Constrained Scalar Gaussian Channels , 1971, Inf. Control..
[23] Geoffrey E. Hinton,et al. Rectified Linear Units Improve Restricted Boltzmann Machines , 2010, ICML.
[24] C. F. Jeff Wu. On the Convergence Properties of the EM Algorithm , 1983 .
[25] Misha Denil,et al. Noisy Activation Functions , 2016, ICML.
[26] Jack J. Dongarra,et al. Guest Editors' Introduction to the Top 10 Algorithms , 2000, Comput. Sci. Eng..
[27] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[28] Cristian Sminchisescu,et al. Matrix Backpropagation for Deep Networks with Structured Layers , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[29] Andrea Vedaldi,et al. MatConvNet: Convolutional Neural Networks for MATLAB , 2014, ACM Multimedia.
[30] Christoforos E. Kozyrakis,et al. Convolution engine: balancing efficiency & flexibility in specialized computing , 2013, ISCA.
[31] D. Rubin,et al. Maximum likelihood from incomplete data via the EM - algorithm plus discussions on the paper , 1977 .
[32] Frantisek Grézl,et al. Optimizing bottle-neck features for LVCSR , 2008, 2008 IEEE International Conference on Acoustics, Speech and Signal Processing.
[33] Andrew S. Cassidy,et al. Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores , 2013, The 2013 International Joint Conference on Neural Networks (IJCNN).
[34] Yoshua Bengio,et al. BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 , 2016, ArXiv.
[35] Dahua Lin,et al. Adjustable Bounded Rectifiers: Towards Deep Binary Representations , 2015, ArXiv.
[36] Henry Markram,et al. On the computational power of circuits of spiking neurons , 2004, J. Comput. Syst. Sci..
[37] Song Han,et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding , 2015, ICLR.
[38] Andrew S. Cassidy,et al. A million spiking-neuron integrated circuit with a scalable communication network and interface , 2014, Science.
[39] Ming Yang,et al. Compressing Deep Convolutional Networks using Vector Quantization , 2014, ArXiv.
[40] Andrew S. Cassidy,et al. Convolutional networks for fast, energy-efficient neuromorphic computing , 2016, Proceedings of the National Academy of Sciences.
[41] Johannes Schemmel,et al. Six Networks on a Universal Neuromorphic Computing Substrate , 2012, Front. Neurosci..
[42] Tara N. Sainath,et al. Structured Transforms for Small-Footprint Deep Learning , 2015, NIPS.
[43] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[44] John Tran,et al. cuDNN: Efficient Primitives for Deep Learning , 2014, ArXiv.
[45] Edward N. Wilson. Backpropagation Learning for Systems with Discrete-Valued Functions , 1998 .
[46] Steven G. Johnson,et al. The Design and Implementation of FFTW3 , 2005, Proceedings of the IEEE.
[47] Rohit Prabhavalkar,et al. Compressing deep neural networks using a rank-constrained topology , 2015, INTERSPEECH.
[48] Robert M. Gray,et al. Toeplitz and Circulant Matrices: A Review , 2005, Found. Trends Commun. Inf. Theory.
[49] Robert M. Gray,et al. Toeplitz and Circulant Matrices: A Review (Foundations and Trends in Communications and Information Theory) , 2006 .
[50] Ebru Arisoy,et al. Low-rank matrix factorization for Deep Neural Network training with high-dimensional output targets , 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[51] Michael Schmuker,et al. A neuromorphic network for generic multivariate data classification , 2014, Proceedings of the National Academy of Sciences.
[52] Ronald J. Williams,et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning , 2004, Machine Learning.
[53] Demis Hassabis,et al. Mastering the game of Go with deep neural networks and tree search , 2016, Nature.