[1] Richard J. Mammone, et al. Meta-neural networks that learn by learning, 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.
[2] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[3] Sebastian Thrun, et al. Learning to Learn, 1998, Springer US.
[4] Richard Hans Robert Hahnloser, et al. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit, 2000, Nature.
[5] Yann LeCun, et al. What is the best multi-stage architecture for object recognition?, 2009, 2009 IEEE 12th International Conference on Computer Vision.
[6] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[7] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[8] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[9] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[10] Yoshua Bengio, et al. Maxout Networks, 2013, ICML.
[11] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[12] Pierre Baldi, et al. Learning Activation Functions to Improve Deep Neural Networks, 2014, ICLR.
[13] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[14] Tianqi Chen, et al. Empirical Evaluation of Rectified Activations in Convolutional Network, 2015, ArXiv.
[15] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[16] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[17] Alex Graves, et al. Conditional Image Generation with PixelCNN Decoders, 2016, NIPS.
[18] Honglak Lee, et al. Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units, 2016, ICML.
[19] Ying Zhang, et al. On Multiplicative Integration with Recurrent Neural Networks, 2016, NIPS.
[20] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[21] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Kevin Gimpel, et al. Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units, 2016, ArXiv.
[24] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[25] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[26] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, ArXiv.
[27] Sepp Hochreiter, et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015, ICLR.
[28] Ivan Laptev, et al. Learnable pooling with Context Gating for video classification, 2017, ArXiv.
[29] Quoc V. Le, et al. Neural Optimizer Search with Reinforcement Learning, 2017, ICML.
[30] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[31] Zeb Kurth-Nelson, et al. Learning to reinforcement learn, 2016, CogSci.
[32] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[33] Giambattista Parascandolo, et al. Taming the waves: sine as activation function in deep neural networks, 2017.
[34] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, ArXiv.
[35] Sepp Hochreiter, et al. Self-Normalizing Neural Networks, 2017, NIPS.
[36] Jun Wang, et al. Reinforcement Learning for Architecture Search by Network Transformation, 2017, ArXiv.
[37] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Quoc V. Le, et al. Neural Architecture Search with Reinforcement Learning, 2016, ICLR.
[39] Junjie Yan, et al. Practical Network Blocks Design with Q-Learning, 2017, ArXiv.
[40] Yann Dauphin, et al. Language Modeling with Gated Convolutional Networks, 2016, ICML.
[41] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[42] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[43] Bolun Cai, et al. Flexible Rectified Linear Units for Improving Convolutional Neural Networks, 2017.
[44] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, ArXiv.
[45] Quoc V. Le, et al. Large-Scale Evolution of Image Classifiers, 2017, ICML.
[46] Kenji Doya, et al. Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning, 2017, Neural Networks.
[47] Guorui Zhou, et al. Deep Interest Network for Click-Through Rate Prediction, 2017, KDD.
[48] Vijay Vasudevan, et al. Learning Transferable Architectures for Scalable Image Recognition, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.