Beyond Network Pruning: A Joint Search-and-Training Approach
Xin Li | Guangming Shi | Weisheng Dong | Xiaotong Lu | Han Huang