[1] Eriko Nurvitadhi, et al. Accelerating Deep Convolutional Networks using low-precision and sparsity, 2017, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[2] Xing Wang, et al. Scalable Compression of Deep Neural Networks, 2016, ACM Multimedia.
[3] Luca Benini, et al. Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations, 2017, NIPS.
[4] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[5] Abhisek Kundu, et al. Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point, 2017, arXiv.
[6] Vincent Lepetit, et al. Learning Separable Filters, 2013, CVPR.
[7] Xuelong Li, et al. Towards Convolutional Neural Networks Compression via Global Error Reconstruction, 2016, IJCAI.
[8] Mohammad Rastegari, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[9] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[10] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[11] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[12] Song Han, et al. Trained Ternary Quantization, 2016, ICLR.
[13] Ming Yang, et al. Compressing Deep Convolutional Networks using Vector Quantization, 2014, arXiv.
[14] Yoshua Bengio, et al. Convergence Properties of the K-Means Algorithms, 1994, NIPS.
[15] Herbert Gish, et al. Asymptotically efficient quantizing, 1968, IEEE Trans. Inf. Theory.
[16] Wonyong Sung, et al. Fixed-point optimization of deep neural networks with adaptive step size retraining, 2017, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[17] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[18] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[19] Shuchang Zhou, et al. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, 2016, arXiv.
[20] Hao Zhou, et al. Less Is More: Towards Compact CNNs, 2016, ECCV.
[21] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[22] Jack Xin, et al. Training Ternary Neural Networks with Exact Proximal Operator, 2016, arXiv.
[23] Sherief Reda, et al. Understanding the impact of precision quantization on the accuracy and energy of neural networks, 2017, Design, Automation & Test in Europe Conference & Exhibition (DATE).
[24] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[25] Jungwon Lee, et al. Towards the Limit of Network Quantization, 2016, ICLR.
[26] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[27] Ming Zhang, et al. Two-Bit Networks for Deep Learning on Resource-Constrained Embedded Devices, 2017, arXiv.
[28] Wonyong Sung, et al. Learning separable fixed-point kernels for deep convolutional neural networks, 2016, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[29] Shuicheng Yan, et al. Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods, 2016, arXiv.
[30] Dharmendra S. Modha, et al. Deep neural networks are robust to weight binarization and other non-linear distortions, 2016, arXiv.
[31] Christian Gagné, et al. Alternating Direction Method of Multipliers for Sparse Convolutional Neural Networks, 2016, arXiv.
[32] Yoshua Bengio, et al. Neural Networks with Few Multiplications, 2015, ICLR.
[33] Yu Cao, et al. Reducing the Model Order of Deep Neural Networks Using Information Theory, 2016, IEEE Computer Society Annual Symposium on VLSI (ISVLSI).
[34] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res.
[35] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[36] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.
[37] Yixin Chen, et al. Compressing Convolutional Neural Networks, 2015, arXiv.
[38] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Luca Benini, et al. Soft-to-Hard Vector Quantization for End-to-End Learned Compression of Images and Neural Networks, 2017, arXiv.
[40] Pushmeet Kohli, et al. Memory Bounded Deep Convolutional Networks, 2014, arXiv.
[41] Razvan Pascanu, et al. Theano: A CPU and GPU Math Compiler in Python, 2010, SciPy.
[42] Yixin Chen, et al. Compressing Neural Networks with the Hashing Trick, 2015, ICML.
[43] Jack Xin, et al. Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection, 2016, Journal of Computational Mathematics.
[44] Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization, 2015, ICLR.
[45] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[46] Max Welling, et al. Soft Weight-Sharing for Neural Network Compression, 2017, ICLR.
[47] Wonyong Sung, et al. Resiliency of Deep Neural Networks under Quantization, 2015, arXiv.
[48] Daisuke Miyashita, et al. Convolutional Neural Networks using Logarithmic Data Representation, 2016, arXiv.
[49] Natalie D. Enright Jerger, et al. Proteus: Exploiting Numerical Precision Variability in Deep Neural Networks, 2016, ICS.
[50] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[51] Ran El-Yaniv, et al. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations, 2016, J. Mach. Learn. Res.