Structured Bayesian Pruning via Log-Normal Multiplicative Noise
Kirill Neklyudov | Dmitry Molchanov | Arsenii Ashukha | Dmitry P. Vetrov
[1] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[2] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[3] Alex Graves, et al. Playing Atari with Deep Reinforcement Learning, 2013, ArXiv.
[4] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[5] Christopher D. Manning, et al. Fast dropout training, 2013, ICML.
[6] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[7] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[8] J. Mooij, et al. Smart Regularization of Deep Architectures, 2015.
[9] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Alexander Novikov, et al. Tensorizing Neural Networks, 2015, NIPS.
[11] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[12] Pushmeet Kohli, et al. PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions, 2015, NIPS.
[13] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[14] Yiran Chen, et al. Learning Structured Sparsity in Deep Neural Networks, 2016, NIPS.
[15] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, ArXiv.
[16] Alexander Novikov, et al. Ultimate tensorization: compressing convolutional and FC layers alike, 2016, ArXiv.
[17] Dmitry Molchanov, et al. Dropout-based Automatic Relevance Determination, 2016, NIPS.
[18] Victor S. Lempitsky, et al. Fast ConvNets Using Group-Wise Brain Damage, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Ali Farhadi, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[20] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[21] Ekaterina Lobacheva, et al. Bayesian Sparsification of Recurrent Neural Networks, 2017, ArXiv.
[22] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[23] Max Welling, et al. Soft Weight-Sharing for Neural Network Compression, 2017, ICLR.
[24] Li Zhang, et al. Spatially Adaptive Computation Time for Residual Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).