[1] Beronda L. Montgomery et al. Building and Sustaining Diverse Functioning Networks Using Social Media and Digital Platforms to Improve Diversity and Inclusivity, 2018, Front. Digit. Humanit.
[2] Nicholay Topin et al. Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates, 2018, Defense + Commercial Sensing.
[3] Nihar B. Shah et al. On Testing for Biases in Peer Review, 2019, NeurIPS.
[4] Waleed Ammar et al. Citation Count Analysis for Papers with Preprints, 2018, arXiv.
[5] Forrest N. Iandola et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, 2016, arXiv.
[6] Quoc V. Le et al. Searching for Activation Functions, 2018, arXiv.
[7] Min Zhang et al. Reviewer bias in single- versus double-blind peer review, 2017, Proceedings of the National Academy of Sciences.
[8] Ilya Sutskever et al. Language Models are Unsupervised Multitask Learners, 2019.
[9] Mark Chen et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[10] Scott Krig et al. Computer Vision Metrics, 2014, Apress.
[11] Leslie N. Smith et al. Cyclical Learning Rates for Training Neural Networks, 2017, IEEE Winter Conference on Applications of Computer Vision (WACV).
[12] David G. Lowe et al. Object recognition from local scale-invariant features, 1999, Proceedings of the Seventh IEEE International Conference on Computer Vision.
[13] Richard W. Hamming. The Art of Doing Science and Engineering: Learning to Learn, 1997.
[14] Leslie N. Smith et al. A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay, 2018, arXiv.
[15] Kevin Gimpel et al. Gaussian Error Linear Units (GELUs), 2016.
[16] Animesh Garg et al. De-anonymization of authors through arXiv submissions during double-blind review, 2020, arXiv.