Fixed-Point Factorized Networks
[1] Daisuke Miyashita, et al. Convolutional Neural Networks using Logarithmic Data Representation, 2016, arXiv.
[2] Sachin S. Talathi, et al. Fixed Point Quantization of Deep Convolutional Networks, 2015, ICML.
[3] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[4] Kyuyeon Hwang, et al. Fixed-point feedforward deep neural network design using weights +1, 0, and −1, 2014, IEEE Workshop on Signal Processing Systems (SiPS).
[5] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[6] Tamara G. Kolda, et al. A semidiscrete matrix decomposition for latent semantic indexing information retrieval, 1998, TOIS.
[7] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[8] Yoshua Bengio, et al. BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, 2016, arXiv.
[9] Jian Cheng, et al. Accelerating Convolutional Neural Networks for Mobile Applications, 2016, ACM Multimedia.
[10] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[11] Daniel Soudry, et al. Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation, 2015, arXiv.
[12] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[13] Yoshua Bengio, et al. Neural Networks with Few Multiplications, 2015, ICLR.
[14] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[15] Ivan V. Oseledets, et al. Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, 2014, ICLR.
[16] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[17] Ran El-Yaniv, et al. Binarized Neural Networks, 2016, NIPS.
[18] Misha Denil, et al. Predicting Parameters in Deep Learning, 2013, NIPS.
[19] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[20] Yann LeCun, et al. Fast Training of Convolutional Networks through FFTs, 2013, ICLR.
[21] Eunhyeok Park, et al. Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications, 2015, ICLR.
[22] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.
[24] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[25] Yoshua Bengio, et al. Low precision arithmetic for deep learning, 2014, ICLR.
[26] Jian Cheng, et al. Quantized Convolutional Neural Networks for Mobile Devices, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[28] Trevor Darrell, et al. Caffe: Convolutional Architecture for Fast Feature Embedding, 2014, ACM Multimedia.
[29] Tim Dettmers, et al. 8-Bit Approximations for Parallelism in Deep Learning, 2015, ICLR.
[30] Yoshua Bengio, et al. Training deep neural networks with low precision multiplications, 2014, arXiv.
[31] Joachim Denzler, et al. ImageNet pre-trained models with batch normalization, 2016, arXiv.
[32] Paris Smaragdis, et al. Bitwise Neural Networks, 2016, arXiv.
[33] Jian Sun, et al. Accelerating Very Deep Convolutional Networks for Classification and Detection, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.