Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
Gang Wang | Zhe L. Lin | Simon See | Xiangfei Kong | Yap-Peng Tan | Jason Kuen | Jianxiong Yin
[1] Jiri Matas, et al. Systematic evaluation of convolution neural network advances on the ImageNet, 2017, Comput. Vis. Image Underst.
[2] Alex Graves, et al. Adaptive Computation Time for Recurrent Neural Networks, 2016, arXiv.
[3] Sepp Hochreiter, et al. Untersuchungen zu dynamischen neuronalen Netzen [Investigations on dynamic neural networks], 1991.
[4] Kilian Q. Weinberger, et al. Deep Networks with Stochastic Depth, 2016, ECCV.
[6] Qiang Chen, et al. Network In Network, 2013, ICLR.
[7] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[8] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[9] Zhuowen Tu, et al. Aggregated Residual Transformations for Deep Neural Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Gregory Shakhnarovich, et al. FractalNet: Ultra-Deep Neural Networks without Residuals, 2016, ICLR.
[11] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[12] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[13] Li Zhang, et al. Spatially Adaptive Computation Time for Residual Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Song Han, et al. Trained Ternary Quantization, 2016, ICLR.
[15] François Chollet, et al. Xception: Deep Learning with Depthwise Separable Convolutions, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Jianxin Wu, et al. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[18] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[19] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[20] H. T. Kung, et al. BranchyNet: Fast inference via early exiting from deep neural networks, 2016, 2016 23rd International Conference on Pattern Recognition (ICPR).
[21] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[22] Gang Wang, et al. An Empirical Study of Language CNN for Image Captioning, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[23] Iasonas Kokkinos, et al. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[24] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] David A. Forsyth, et al. Swapout: Learning an ensemble of deep architectures, 2016, NIPS.
[26] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Vivienne Sze, et al. Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[29] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[30] Ali Farhadi, et al. YOLO9000: Better, Faster, Stronger, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Jean Ponce, et al. A Theoretical Analysis of Feature Pooling in Visual Recognition, 2010, ICML.
[32] Ting Liu, et al. Recent advances in convolutional neural networks, 2015, Pattern Recognit.
[33] Steven Bohez, et al. Resource-constrained classification using a cascade of neural network layers, 2015, 2015 International Joint Conference on Neural Networks (IJCNN).
[34] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[35] Kilian Q. Weinberger, et al. Memory-Efficient Implementation of DenseNets, 2017, arXiv.
[36] Chen Sun, et al. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[37] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[38] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[39] Ran El-Yaniv, et al. Binarized Neural Networks, 2016, arXiv.
[40] Xiangyu Zhang, et al. Channel Pruning for Accelerating Very Deep Neural Networks, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[41] Jian Sun, et al. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.