暂无分享,去创建一个
[1] Pushmeet Kohli,et al. Memory Bounded Deep Convolutional Networks , 2014, ArXiv.
[2] Dmitry P. Vetrov,et al. Variational Dropout Sparsifies Deep Neural Networks , 2017, ICML.
[3] Max Welling,et al. Bayesian Compression for Deep Learning , 2017, NIPS.
[4] Yurong Chen,et al. Dynamic Network Surgery for Efficient DNNs , 2016, NIPS.
[5] Song Han,et al. Learning both Weights and Connections for Efficient Neural Network , 2015, NIPS.
[6] David P. Wipf,et al. Compressing Neural Networks using the Variational Information Bottleneck , 2018, ICML.
[7] Peter Stone,et al. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science , 2017, Nature Communications.
[8] Ariel D. Procaccia,et al. Variational Dropout and the Local Reparameterization Trick , 2015, NIPS.
[9] Nikko Strom,et al. Sparse connection and pruning in large dynamic artificial neural networks. , 1997 .
[10] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[11] Nikos Komodakis,et al. Wide Residual Networks , 2016, BMVC.
[12] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[13] Yang Yang,et al. Deep Learning Scaling is Predictable, Empirically , 2017, ArXiv.
[14] Zhiqiang Shen,et al. Learning Efficient Convolutional Networks through Network Slimming , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[15] T. J. Mitchell,et al. Bayesian Variable Selection in Linear Regression , 1988 .
[16] Myle Ott,et al. Scaling Neural Machine Translation , 2018, WMT.
[17] Heiga Zen,et al. WaveNet: A Generative Model for Raw Audio , 2016, SSW.
[18] Erich Elsen,et al. Efficient Neural Audio Synthesis , 2018, ICML.
[19] Song Han,et al. AMC: AutoML for Model Compression and Acceleration on Mobile Devices , 2018, ECCV.
[20] Jianxin Wu,et al. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[21] Lucas Theis,et al. Faster gaze prediction with dense networks and Fisher pruning , 2018, ArXiv.
[22] Jiwen Lu,et al. Runtime Neural Pruning , 2017, NIPS.
[23] Mingjie Sun,et al. Rethinking the Value of Network Pruning , 2018, ICLR.
[24] Erich Elsen,et al. Exploring Sparsity in Recurrent Neural Networks , 2017, ICLR.
[25] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[27] Jan Skoglund,et al. LPCNET: Improving Neural Speech Synthesis through Linear Prediction , 2018, ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[28] Michael Carbin,et al. The Lottery Ticket Hypothesis: Training Pruned Neural Networks , 2018, ArXiv.
[29] Yann LeCun,et al. Optimal Brain Damage , 1989, NIPS.
[30] Max Welling,et al. Learning Sparse Neural Networks through L0 Regularization , 2017, ICLR.
[31] Babak Hassibi,et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon , 1992, NIPS.
[32] Timo Aila,et al. Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning , 2016, ArXiv.