Ningxin Zheng | Xuanyi Dong | Donglin Bai | Xinyang Jiang | Bo Li | Lu Liu | Dongsheng Li | Yuge Zhang | Yuqing Yang
[1] Li Fei-Fei, et al. Progressive Neural Architecture Search, 2017, ECCV.
[2] Ping Liu, et al. Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration, 2019, CVPR.
[3] Rui Peng, et al. Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures, 2016, ArXiv.
[4] Daniel Soudry, et al. Post Training 4-bit Quantization of Convolutional Networks for Rapid-Deployment, 2018, NeurIPS.
[5] David Thorsley, et al. Post-training Piecewise Linear Quantization for Deep Neural Networks, 2020, ECCV.
[6] Yoni Choukroun, et al. Low-bit Quantization of Neural Networks for Efficient Inference, 2019, ICCVW.
[7] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[8] Song Han, et al. ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware, 2018, ICLR.
[9] Yiming Yang, et al. DARTS: Differentiable Architecture Search, 2018, ICLR.
[10] Quoc V. Le, et al. Large-Scale Evolution of Image Classifiers, 2017, ICML.
[11] Anastasios Tefas, et al. Probabilistic Knowledge Transfer for Lightweight Deep Representation Learning, 2020, IEEE Transactions on Neural Networks and Learning Systems.
[12] Greg Mori, et al. Similarity-Preserving Knowledge Distillation, 2019, ICCV.
[13] Kurt Keutzer, et al. ZeroQ: A Novel Zero Shot Quantization Framework, 2020, CVPR.
[14] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[15] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[16] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.
[17] Yoshua Bengio, et al. Tackling Climate Change with Machine Learning, 2019, ACM Comput. Surv.
[18] Ian D. Reid, et al. Towards Effective Low-Bitwidth Convolutional Neural Networks, 2018, CVPR.
[19] Chen Liang, et al. Carbon Emissions and Large Neural Network Training, 2021, ArXiv.
[20] Yu Liu, et al. Correlation Congruence for Knowledge Distillation, 2019, ICCV.
[21] Edouard Grave, et al. Training with Quantization Noise for Extreme Model Compression, 2020, ICLR.
[22] Naiyan Wang, et al. Like What You Like: Knowledge Distill via Neuron Selectivity Transfer, 2017, ArXiv.
[23] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[24] Kalyanmoy Deb, et al. Pymoo: Multi-Objective Optimization in Python, 2020, IEEE Access.
[25] Nikos Komodakis, et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, 2016, ICLR.
[26] Patrick T. Hester, et al. An Analysis of Multi-Criteria Decision Making Methods, 2013.
[27] Yan Lu, et al. Relational Knowledge Distillation, 2019, CVPR.
[28] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[29] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[30] Yonglong Tian, et al. Contrastive Representation Distillation, 2019, ICLR.
[31] Jin Young Choi, et al. Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons, 2018, AAAI.
[32] Quoc V. Le, et al. Efficient Neural Architecture Search via Parameter Sharing, 2018, ICML.
[33] Gaurav Menghani, et al. Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better, 2021, ACM Comput. Surv.
[34] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[35] Junmo Kim, et al. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning, 2017, CVPR.
[36] Neil D. Lawrence, et al. Variational Information Distillation for Knowledge Transfer, 2019, CVPR.
[37] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[38] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[39] Chuang Gan, et al. Once for All: Train One Network and Specialize It for Efficient Deployment, 2019, ICLR.
[40] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[41] Jangho Kim, et al. Paraphrasing Complex Network: Network Compression via Factor Transfer, 2018, NeurIPS.
[42] T. Stocker, et al. Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change, 2012.
[43] Fan Yang, et al. Retiarii: A Deep Learning Exploratory-Training Framework, 2020, OSDI.
[44] Pavlo Molchanov, et al. Importance Estimation for Neural Network Pruning, 2019, CVPR.