Zhengping Che | Xiaolong Ma | Sijia Liu | Yanzhi Wang | Ning Liu | Qing Jin | Geng Yuan | Xuan Shen | Jian Ren | Jian Tang
[1] Niraj K. Jha, et al. NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm, 2017, IEEE Transactions on Computers.
[2] Shiyu Chang, et al. The Lottery Ticket Hypothesis for Pre-trained BERT Networks, 2020, NeurIPS.
[3] Daniel L. K. Yamins, et al. Pruning Neural Networks without Any Data by Iteratively Conserving Synaptic Flow, 2020, NeurIPS.
[4] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[5] Yuandong Tian, et al. One Ticket to Win Them All: Generalizing Lottery Ticket Initializations Across Datasets and Optimizers, 2019, NeurIPS.
[6] Yiran Chen, et al. Learning Structured Sparsity in Deep Neural Networks, 2016, NIPS.
[7] Rongrong Ji, et al. HRank: Filter Pruning Using High-Rank Feature Map, 2020, CVPR.
[8] David J. Schwab, et al. The Early Phase of Neural Network Training, 2020, ICLR.
[9] Mehul Motani, et al. DropNet: Reducing Neural Network Complexity via Iterative Pruning, 2020, ICML.
[10] Gintare Karolina Dziugaite, et al. Stabilizing the Lottery Ticket Hypothesis, 2019, ArXiv.
[11] Yiran Chen, et al. 2PFPCE: Two-Phase Filter Pruning Based on Conditional Entropy, 2018, ArXiv.
[12] Adam R. Klivans, et al. Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection, 2020, ICML.
[13] Alexander G. Gray, et al. Stochastic Alternating Direction Method of Multipliers, 2013, ICML.
[14] Jieping Ye, et al. AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates, 2020, AAAI.
[15] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[16] Stephen P. Boyd, et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, 2011, Foundations and Trends in Machine Learning.
[17] Xiangyu Zhang, et al. Channel Pruning for Accelerating Very Deep Neural Networks, 2017, ICCV.
[18] Ping Liu, et al. Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration, 2019, CVPR.
[19] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[20] Jiayu Li, et al. ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Methods of Multipliers, 2018, ASPLOS.
[21] Yue Wang, et al. Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks, 2019, ICLR.
[22] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[23] Michael Carbin, et al. Comparing Rewinding and Fine-tuning in Neural Network Pruning, 2019, ICLR.
[24] Yanzhi Wang, et al. Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers, 2018, ICLR.
[25] Gilad Yehudai, et al. Proving the Lottery Ticket Hypothesis: Pruning is All You Need, 2020, ICML.
[26] Gintare Karolina Dziugaite, et al. Pruning Neural Networks at Initialization: Why Are We Missing the Mark?, 2020, ArXiv.
[27] Mingjie Sun, et al. Rethinking the Value of Network Pruning, 2018, ICLR.
[28] Hanwang Zhang, et al. Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration, 2020, CVPR.