Kushal Datta | Vivek Menon | Vikram Saletore | Vamsi Sripathi | Aishwarya Bhandare | Deepthi Karkada | Sun Choi