Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units
Honglak Lee | Kihyuk Sohn | Wenling Shang | Diogo Almeida