[1] Dan Hendrycks and Kevin Gimpel. Gaussian Error Linear Units (GELUs). arXiv:1606.08415, 2016.
[2] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015.
[3] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Swish: a Self-Gated Activation Function. arXiv:1710.05941, 2017.
[4] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015.
[5] Vinod Nair and Geoffrey E. Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In ICML, 2010.
[6] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). In ICLR, 2016.
[7] I. M. Chakravarti, R. G. Laha, and J. Roy. Handbook of Methods of Applied Statistics. Wiley, 1967.
[8] David Silver, et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 2018.
[9] Swalpa Kumar Roy, et al. LiSHT: Non-Parametric Linearly Scaled Hyperbolic Tangent Activation Function for Neural Networks. In CVIP, 2019.