暂无分享,去创建一个
[1] Bolun Cai,et al. FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks , 2018, 2018 24th International Conference on Pattern Recognition (ICPR).
[2] Leslie N. Smith,et al. Cyclical Learning Rates for Training Neural Networks , 2015, 2017 IEEE Winter Conference on Applications of Computer Vision (WACV).
[3] Hongyi Zhang,et al. mixup: Beyond Empirical Risk Minimization , 2017, ICLR.
[4] Stephen Marshall,et al. Activation Functions: Comparison of trends in Practice and Research for Deep Learning , 2018, ArXiv.
[5] Brahim Chaib-draa,et al. Parametric Exponential Linear Unit for Deep Convolutional Neural Networks , 2016, 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA).
[6] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models , 2013 .
[7] Geoffrey E. Hinton,et al. Rectified Linear Units Improve Restricted Boltzmann Machines , 2010, ICML.
[8] Shun-ichi Amari,et al. Natural Gradient Works Efficiently in Learning , 1998, Neural Computation.
[9] Bolun Cai,et al. Flexible Rectified Linear Units for Improving Convolutional Neural Networks , 2017 .
[10] Jian Sun,et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[11] Brian Whitney,et al. Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs) , 2017, ArXiv.
[12] Sepp Hochreiter,et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) , 2015, ICLR.
[13] Kevin Gimpel,et al. Gaussian Error Linear Units (GELUs) , 2016 .
[14] Saman Ghili,et al. Tiny ImageNet Visual Recognition Challenge , 2014 .
[15] Hao Wu,et al. Mixed Precision Training , 2017, ICLR.