CondenseNet: An Efficient DenseNet Using Learned Group Convolutions
Gao Huang | Shichen Liu | Laurens van der Maaten | Kilian Q. Weinberger
[1] Yann LeCun,et al. Optimal Brain Damage , 1989, NIPS.
[2] Gregory J. Wolff,et al. Optimal Brain Surgeon and general network pruning , 1993, IEEE International Conference on Neural Networks.
[3] Rich Caruana,et al. Model compression , 2006, KDD '06.
[4] M. Yuan,et al. Model selection and estimation in regression with grouped variables , 2006 .
[6] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[7] Fei-Fei Li,et al. ImageNet: A large-scale hierarchical image database , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[8] Geoffrey E. Hinton,et al. Rectified Linear Units Improve Restricted Boltzmann Machines , 2010, ICML.
[10] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[11] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[12] Qiang Chen,et al. Network In Network , 2013, ICLR.
[13] Pietro Perona,et al. Microsoft COCO: Common Objects in Context , 2014, ECCV.
[14] Geoffrey E. Hinton,et al. Distilling the Knowledge in a Neural Network , 2015, ArXiv.
[15] Thomas Brox,et al. Striving for Simplicity: The All Convolutional Net , 2014, ICLR.
[16] Song Han,et al. Learning both Weights and Connections for Efficient Neural Network , 2015, NIPS.
[17] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[18] Jürgen Schmidhuber,et al. Training Very Deep Networks , 2015, NIPS.
[19] Yoshua Bengio,et al. FitNets: Hints for Thin Deep Nets , 2014, ICLR.
[20] Jian Sun,et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[21] Dumitru Erhan,et al. Going deeper with convolutions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[23] Yixin Chen,et al. Compressing Neural Networks with the Hashing Trick , 2015, ICML.
[24] Mathieu Salzmann,et al. Learning the Number of Neurons in Deep Networks , 2016, NIPS.
[25] Alex Graves,et al. Adaptive Computation Time for Recurrent Neural Networks , 2016, ArXiv.
[27] Nikos Komodakis,et al. Wide Residual Networks , 2016, BMVC.
[28] Ran El-Yaniv,et al. Binarized Neural Networks , 2016, ArXiv.
[29] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Kilian Q. Weinberger,et al. Deep Networks with Stochastic Depth , 2016, ECCV.
[31] Song Han,et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding , 2015, ICLR.
[32] Jian Sun,et al. Identity Mappings in Deep Residual Networks , 2016, ECCV.
[33] Forrest N. Iandola,et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size , 2016, ArXiv.
[34] Jingdong Wang,et al. Interleaved Group Convolutions for Deep Neural Networks , 2017, ArXiv.
[35] Bo Chen,et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications , 2017, ArXiv.
[36] Kilian Q. Weinberger,et al. Densely Connected Convolutional Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[37] Venkatesh Saligrama,et al. Adaptive Neural Networks for Fast Test-Time Prediction , 2017, ArXiv.
[39] Li Zhang,et al. Spatially Adaptive Computation Time for Residual Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[40] Zhiqiang Shen,et al. Learning Efficient Convolutional Networks through Network Slimming , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[41] Frank Hutter,et al. SGDR: Stochastic Gradient Descent with Warm Restarts , 2016, ICLR.
[42] Kilian Q. Weinberger,et al. Snapshot Ensembles: Train 1, get M for free , 2017, ICLR.
[43] Hanan Samet,et al. Pruning Filters for Efficient ConvNets , 2016, ICLR.
[44] Gregory Shakhnarovich,et al. FractalNet: Ultra-Deep Neural Networks without Residuals , 2016, ICLR.
[45] Xiangyu Zhang,et al. Channel Pruning for Accelerating Very Deep Neural Networks , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[46] Zhuowen Tu,et al. Aggregated Residual Transformations for Deep Neural Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[47] Kilian Q. Weinberger,et al. Multi-Scale Dense Networks for Resource Efficient Image Classification , 2017, ICLR.
[48] Xiangyu Zhang,et al. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[49] Vijay Vasudevan,et al. Learning Transferable Architectures for Scalable Image Recognition , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[50] Yueting Zhuang,et al. Deep Convolutional Neural Networks with Merge-and-Run Mappings , 2016, IJCAI.