Training LSTM Networks With Resistive Cross-Point Devices
[1] Farnood Merrikh-Bayat,et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors , 2014, Nature.
[2] Jian Sun,et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[3] Suyog Gupta,et al. Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study , 2015, 2016 IEEE 16th International Conference on Data Mining (ICDM).
[4] Trishul M. Chilimbi,et al. Project Adam: Building an Efficient and Scalable Deep Learning Training System , 2014, OSDI.
[5] Guigang Zhang,et al. Deep Learning , 2016, Int. J. Semantic Comput..
[6] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[7] Kazuya Takeda,et al. Daily-Life Activity Recognition Based on Recurrent Neural Networks , 2016 .
[8] George Kurian,et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation , 2016, ArXiv.
[9] Tayfun Gökmen,et al. Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations , 2016, Front. Neurosci..
[10] Geoffrey E. Hinton,et al. Learning representations by back-propagating errors , 1986, Nature.
[11] Marc'Aurelio Ranzato,et al. Large Scale Distributed Deep Networks , 2012, NIPS.
[12] Zachary Chase Lipton. A Critical Review of Recurrent Neural Networks for Sequence Learning , 2015, ArXiv.
[13] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[14] Yonghui Wu,et al. Exploring the Limits of Language Modeling , 2016, ArXiv.
[15] Hyung-Min Lee,et al. Analog CMOS-based resistive processing unit for deep neural network training , 2017, 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS).
[16] Fei-Fei Li,et al. Deep visual-semantic alignments for generating image descriptions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[18] Yoshua Bengio,et al. Gated Feedback Recurrent Neural Networks , 2015, ICML.
[19] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[20] Wojciech Zaremba,et al. Recurrent Neural Network Regularization , 2014, ArXiv.
[21] Pritish Narayanan,et al. Equivalent-accuracy accelerated neural-network training using analogue memory , 2018, Nature.
[22] Fei-Fei Li,et al. Visualizing and Understanding Recurrent Networks , 2015, ArXiv.
[23] Shimeng Yu,et al. Mitigating effects of non-ideal synaptic device characteristics for on-chip learning , 2015, 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD).
[24] Yoshua Bengio,et al. A network of deep neural networks for Distant Speech Recognition , 2017, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[25] Avinash Sodani,et al. Knights landing (KNL): 2nd Generation Intel® Xeon Phi processor , 2015, 2015 IEEE Hot Chips 27 Symposium (HCS).
[26] Pritish Narayanan,et al. Neuromorphic computing using non-volatile memory , 2017 .
[27] Tayfun Gokmen,et al. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices , 2017, Front. Neurosci..
[28] Sapan Agarwal,et al. Li‐Ion Synaptic Transistor for Low Power Analog Computing , 2017, Advanced materials.
[29] Yoshua Bengio,et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation , 2014, EMNLP.
[30] Ojas Parekh,et al. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding , 2016, Front. Neurosci..
[31] Jason Weston,et al. Natural Language Processing (Almost) from Scratch , 2011, J. Mach. Learn. Res..
[32] E. Leobandung,et al. Capacitor-based Cross-point Array for Analog Neural Network with Record Symmetry and Linearity , 2018, 2018 IEEE Symposium on VLSI Technology.
[33] Pritish Narayanan,et al. Deep Learning with Limited Numerical Precision , 2015, ICML.
[34] David A. Patterson,et al. In-datacenter performance analysis of a tensor processing unit , 2017, 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA).
[35] Thomas S. Huang,et al. Dilated Recurrent Neural Networks , 2017, NIPS.
[36] Tara N. Sainath,et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups , 2012, IEEE Signal Processing Magazine.
[37] Tao Wang,et al. Deep learning with COTS HPC systems , 2013, ICML.
[38] Y. Leblebici,et al. Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: Comparative performance analysis (accuracy, speed, and power) , 2015, 2015 IEEE International Electron Devices Meeting (IEDM).
[39] Shuicheng Yan,et al. Dual Path Networks , 2017, NIPS.