BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Matthieu Courbariaux | Yoshua Bengio | Jean-Pierre David
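The core idea of BinaryConnect, as the title describes, is to constrain the weights to +1 or −1 during the forward and backward propagations while keeping a real-valued copy of the weights for the parameter update. Below is a minimal NumPy sketch of that scheme; the function names (`binarize`, `sgd_step`) and the learning-rate value are illustrative, not from the paper itself.

```python
import numpy as np

def binarize(w, stochastic=False, rng=None):
    """Binarize real-valued weights to +1/-1 for propagation.

    Deterministic mode thresholds at zero; stochastic mode samples
    +1 with probability given by the hard sigmoid
    p = clip((w + 1) / 2, 0, 1).
    """
    if stochastic:
        rng = rng or np.random.default_rng(0)
        p = np.clip((w + 1.0) / 2.0, 0.0, 1.0)
        return np.where(rng.random(w.shape) < p, 1.0, -1.0)
    return np.where(w >= 0.0, 1.0, -1.0)

def sgd_step(w_real, grad_wrt_binary, lr=0.01):
    """Accumulate the gradient (computed with the binary weights)
    into the real-valued weights, then clip them to [-1, 1] so the
    accumulator stays in the range the binarization expects."""
    w_real = w_real - lr * grad_wrt_binary
    return np.clip(w_real, -1.0, 1.0)

# The forward pass multiplies activations by the binary weights,
# which replaces multiplications by sign flips and additions.
w = np.array([-0.7, 0.2, 0.0, 0.9])
wb = binarize(w)  # -> [-1., 1., 1., 1.]
```

The real-valued weights act as an accumulator for small gradient updates that a single binarized step would otherwise discard; this is what lets training converge despite the extreme quantization during propagation.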