XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
Mohammad Rastegari | Vicente Ordonez | Joseph Redmon | Ali Farhadi
[1] M. Gottmer. Merging Reality and Virtuality with Microsoft HoloLens, 2015.
[2] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[3] Yoshua Bengio, et al. Big Neural Networks Waste Capacity, 2013, ICLR.
[4] Yixin Chen, et al. Compressing Neural Networks with the Hashing Trick, 2015, ICML.
[5] Ron Meir, et al. Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights, 2014, NIPS.
[6] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[7] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[8] George Cybenko, et al. Approximation by superpositions of a sigmoidal function, 1989, Math. Control. Signals Syst.
[9] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[10] Shaohua Kevin Zhou, et al. Cross-Domain Synthesis of Medical Images Using Efficient Location-Sensitive Deep Network, 2015, MICCAI.
[11] Qiang Chen, et al. Network In Network, 2013, ICLR.
[12] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[13] Trevor Darrell, et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, 2014, CVPR.
[14] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[15] Yoshua Bengio, et al. BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, 2016, ArXiv.
[16] Dong Yu, et al. Conversational Speech Transcription Using Context-Dependent Deep Neural Networks, 2012, ICML.
[17] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[18] Ross B. Girshick, et al. Fast R-CNN, 2015, ICCV.
[19] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[20] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[21] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[22] Michael I. Jordan, et al. Advances in Neural Information Processing Systems 30, 1995.
[23] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[24] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, ArXiv.
[25] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[26] Trevor Darrell, et al. Fully Convolutional Networks for Semantic Segmentation, 2017, IEEE TPAMI.
[27] Wonyong Sung, et al. Fixed point optimization of deep convolutional neural networks for object recognition, 2015, ICASSP.
[28] Ming Yang, et al. Compressing Deep Convolutional Networks using Vector Quantization, 2014, ArXiv.
[29] Vincent Vanhoucke, et al. Improving the speed of neural networks on CPUs, 2011.
[30] Dharmendra S. Modha, et al. Backpropagation for Energy-Efficient Neuromorphic Computing, 2015, NIPS.
[31] Paris Smaragdis, et al. Bitwise Neural Networks, 2016, ArXiv.
[32] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[33] Carlo Baldassi, et al. Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses, 2015, Physical Review Letters.
[34] Yoshua Bengio, et al. Training deep neural networks with low precision multiplications, 2014.
[35] Kyuyeon Hwang, et al. Fixed-point feedforward deep neural network design using weights +1, 0, and −1, 2014, IEEE SiPS.
[36] Yoshua Bengio, et al. Neural Networks with Few Multiplications, 2015, ICLR.
[37] Dumitru Erhan, et al. Going deeper with convolutions, 2015, CVPR.
[38] Misha Denil, et al. Predicting Parameters in Deep Learning, 2013, NIPS.
[39] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[40] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE TPAMI.
[41] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[42] Lorien Y. Pratt, et al. Comparing Biases for Minimal Network Construction with Back-Propagation, 1988, NIPS.
[43] Aditya Bhaskara, et al. Provable Bounds for Learning Some Deep Representations, 2013, ICML.