[1] Yoshua Bengio, et al. On Using Very Large Target Vocabulary for Neural Machine Translation, 2014, ACL.
[2] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[3] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[4] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[5] Ian McGraw, et al. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition, 2016, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[6] Christopher D. Manning, et al. Effective Approaches to Attention-based Neural Machine Translation, 2015, EMNLP.
[7] Christopher D. Manning, et al. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models, 2016, ACL.
[8] Mauro Cettolo, et al. WIT3: Web Inventory of Transcribed and Translated Talks, 2012, EAMT.
[9] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, ArXiv.
[10] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[11] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[12] Yoshua Bengio, et al. Neural Networks with Few Multiplications, 2015, ICLR.
[13] Phil Blunsom, et al. Recurrent Continuous Translation Models, 2013, EMNLP.
[14] T. Kathirvalavakumar, et al. Pruning algorithms of neural networks — a comparative study, 2013, Central European Journal of Computer Science.
[15] Rico Sennrich, et al. Improving Neural Machine Translation Models with Monolingual Data, 2015, ACL.
[16] David Chiang, et al. Auto-Sizing Neural Networks: With Applications to n-gram Language Models, 2015, EMNLP.
[17] Wojciech Zaremba, et al. Recurrent Neural Network Regularization, 2014, ArXiv.
[18] Quoc V. Le, et al. Addressing the Rare Word Problem in Neural Machine Translation, 2014, ACL.
[19] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[20] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[21] Yoshua Bengio, et al. Training deep neural networks with low precision multiplications, 2014.
[22] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.
[23] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[24] Tara N. Sainath, et al. Learning compact recurrent neural networks, 2016, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[25] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[26] Pushmeet Kohli, et al. Memory Bounded Deep Convolutional Networks, 2014, ArXiv.
[27] Yixin Chen, et al. Compressing Neural Networks with the Hashing Trick, 2015, ICML.
[28] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[29] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[30] Christopher D. Manning, et al. Stanford Neural Machine Translation Systems for Spoken Language Domains, 2015, IWSLT.
[31] Yoshua Bengio, et al. Montreal Neural Machine Translation Systems for WMT’15, 2015, WMT@EMNLP.
[32] Geoffrey E. Hinton, et al. A Simple Way to Initialize Recurrent Networks of Rectified Linear Units, 2015, ArXiv.
[33] Yoshua Bengio, et al. Low precision arithmetic for deep learning, 2014, ICLR.