Charbel Sakr | Naresh R. Shanbhag | Ankur Agrawal | Kailash Gopalakrishnan | Jungwook Choi | Chia-Yu Chen | Naigang Wang
[1] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[2] Swagath Venkataramani, et al. Exploiting approximate computing for deep learning acceleration, 2018, 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE).
[3] Hao Wu, et al. Mixed Precision Training, 2017, ICLR.
[4] Sachin S. Talathi, et al. Fixed Point Quantization of Deep Convolutional Networks, 2015, ICML.
[5] Ran El-Yaniv, et al. Binarized Neural Networks, 2016, ArXiv.
[6] Shuang Wu, et al. Training and Inference with Integers in Deep Neural Networks, 2018, ICLR.
[7] Dharmendra S. Modha, et al. Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference, 2018, ArXiv.
[8] Charbel Sakr, et al. Analytical Guarantees on Numerical Precision of Deep Neural Networks, 2017, ICML.
[9] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.
[10] Nicholas J. Higham, et al. The Accuracy of Floating Point Summation, 1993, SIAM J. Sci. Comput.
[11] R. C. Whaley, et al. Reducing Floating Point Error in Dot Product Using the Superblock Family of Algorithms, 2008, SIAM J. Sci. Comput.
[12] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[13] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, ArXiv.
[14] Bernard Widrow, et al. Quantization Noise: A Few Properties of Selected Distributions, 2008.
[15] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[16] Stuart C. Schwartz, et al. Best “ordering” for floating-point addition, 1988, TOMS.
[17] Song Han, et al. Trained Ternary Quantization, 2016, ICLR.
[18] Xavier Gastaldi, et al. Shake-Shake regularization, 2017, ArXiv.
[19] Shuchang Zhou, et al. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, 2016, ArXiv.
[20] Daniel Brand, et al. Training Deep Neural Networks with 8-bit Floating Point Numbers, 2018, NeurIPS.
[21] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[22] Xin Wang, et al. Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks, 2017, NIPS.