Densely Connected Convolutional Networks
[1] Christian Lebiere, et al. The Cascade-Correlation Learning Architecture, 1989, NIPS.
[2] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[3] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[4] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[5] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[6] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[7] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[8] Hao Yu, et al. Neural Network Learning Without Backpropagation, 2010, IEEE Transactions on Neural Networks.
[9] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[10] Clément Farabet, et al. Torch7: A Matlab-like Environment for Machine Learning, 2011, NIPS.
[11] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[13] Yann LeCun, et al. Convolutional neural networks applied to house numbers digit classification, 2012, Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012).
[14] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[15] Yann LeCun, et al. Pedestrian Detection with Unsupervised Multi-stage Feature Learning, 2012, 2013 IEEE Conference on Computer Vision and Pattern Recognition.
[16] Geoffrey E. Hinton, et al. On the importance of initialization and momentum in deep learning, 2013, ICML.
[17] Yoshua Bengio, et al. Maxout Networks, 2013, ICML.
[18] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[19] Qiang Chen, et al. Network In Network, 2013, ICLR.
[20] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[21] Matt J. Kusner, et al. Deep Manifold Traversal: Changing Labels with Convolutional Features, 2015, ArXiv.
[22] Jitendra Malik, et al. Hypercolumns for object segmentation and fine-grained localization, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[24] Songfan Yang, et al. Multi-scale Recognition with DAG-CNNs, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[25] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[26] Jürgen Schmidhuber, et al. Training Very Deep Networks, 2015, NIPS.
[27] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[28] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[29] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[31] Leon A. Gatys, et al. A Neural Algorithm of Artistic Style, 2015, ArXiv.
[32] Zhuowen Tu, et al. Deeply-Supervised Nets, 2014, AISTATS.
[33] Diogo Almeida, et al. Resnet in Resnet: Generalizing Residual Architectures, 2016, ArXiv.
[34] Wenjun Zeng, et al. Deeply-Fused Nets, 2016, ArXiv.
[35] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[36] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[37] Yoshua Bengio, et al. Deconstructing the Ladder Network Architecture, 2015, ICML.
[38] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Kilian Q. Weinberger, et al. Deep Networks with Stochastic Depth, 2016, ECCV.
[40] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[41] Kibok Lee, et al. Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification, 2016, ICML.
[42] Tomaso A. Poggio, et al. Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex, 2016, ArXiv.
[43] Trevor Darrell, et al. Fully Convolutional Networks for Semantic Segmentation, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[44] Mehryar Mohri, et al. AdaNet: Adaptive Structural Learning of Artificial Neural Networks, 2016, ICML.
[45] Kilian Q. Weinberger, et al. Memory-Efficient Implementation of DenseNets, 2017, ArXiv.
[46] Gregory Shakhnarovich, et al. FractalNet: Ultra-Deep Neural Networks without Residuals, 2016, ICLR.