Geoffrey E. Hinton | Yarin Gal | Kevin Swersky | Aidan N. Gomez | Ivan Zhang
[1] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[2] Yarin Gal. Uncertainty in Deep Learning, PhD thesis, 2016.
[3] Max Welling, et al. Learning Sparse Neural Networks through L0 Regularization, 2017, ICLR.
[4] Dustin Tran, et al. Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches, 2018, ICLR.
[5] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[6] Max Welling, et al. Soft Weight-Sharing for Neural Network Compression, 2017, ICLR.
[7] Gintare Karolina Dziugaite, et al. The Lottery Ticket Hypothesis at Scale, 2019, arXiv.
[8] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[9] M. Yuan, et al. Model selection and estimation in regression with grouped variables, 2006.
[10] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[11] Samy Bengio, et al. Tensor2Tensor for Neural Machine Translation, 2018, AMTA.
[12] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[13] Ryan P. Adams, et al. Learning Ordered Representations with Nested Dropout, 2014, ICML.
[14] Paris Smaragdis, et al. NoiseOut: A Simple Way to Prune Neural Networks, 2016, arXiv.
[15] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[16] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[17] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[18] Michael Carbin, et al. The Lottery Ticket Hypothesis: Training Pruned Neural Networks, 2018, arXiv.
[19] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[20] Erich Elsen, et al. The State of Sparsity in Deep Neural Networks, 2019, arXiv.
[21] Heiga Zen, et al. WaveNet: A Generative Model for Raw Audio, 2016, SSW.
[22] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[23] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[24] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, arXiv.
[25] Lucas Theis, et al. Faster gaze prediction with dense networks and Fisher pruning, 2018, arXiv.
[26] Xu Sun, et al. meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting, 2017, ICML.
[27] Suya You, et al. Learning to Prune Filters in Convolutional Neural Networks, 2018, WACV.
[28] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[29] Alex Graves, et al. Neural Machine Translation in Linear Time, 2016, arXiv.
[30] Mingjie Sun, et al. Rethinking the Value of Network Pruning, 2018, ICLR.
[31] Tim Kraska, et al. Smallify: Learning Network Size while Training, 2018, arXiv.
[32] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[33] Alex Kendall, et al. Concrete Dropout, 2017, NIPS.
[34] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[35] Amos J. Storkey, et al. Pruning neural networks: is it time to nip it in the bud?, 2018, arXiv.
[36] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[37] Huan Wang, et al. Structured Probabilistic Pruning for Convolutional Neural Network Acceleration, 2017, BMVC.
[38] Xin Dong, et al. Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon, 2017, NIPS.
[39] Amos Storkey, et al. A Closer Look at Structured Pruning for Neural Network Compression, 2018.
[40] Brendan J. Frey, et al. Adaptive dropout for training deep neural networks, 2013, NIPS.
[41] Yiran Chen, et al. Learning Structured Sparsity in Deep Neural Networks, 2016, NIPS.
[42] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, Journal of Machine Learning Research.
[43] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[44] Vittorio Murino, et al. Excitation Dropout: Encouraging Plasticity in Deep Neural Networks, 2018, arXiv.