Razvan Pascanu | Michal Valko | Florian Strub | Bilal Piot | Jean-Bastien Grill | Soham De | Pierre H. Richemond | Corentin Tallec | Andrew Brock | Florent Altché | Samuel L. Smith
[1] Alexei A. Efros, et al. Colorful Image Colorization, 2016, ECCV.
[2] Harri Valpola, et al. Mean Teachers Are Better Role Models: Weight-averaged Consistency Targets Improve Semi-supervised Deep Learning Results, 2017, arXiv.
[3] Xinlei Chen, et al. Understanding Self-supervised Learning with Dual Deep Networks, 2020, arXiv.
[4] Samuel S. Schoenholz, et al. Mean Field Residual Networks: On the Edge of Chaos, 2017, NIPS.
[5] Ali Razavi, et al. Data-Efficient Image Recognition with Contrastive Predictive Coding, 2019, ICML.
[6] R Devon Hjelm, et al. Learning Representations by Maximizing Mutual Information Across Views, 2019, NeurIPS.
[7] Jascha Sohl-Dickstein, et al. A Mean Field Theory of Batch Normalization, 2019, ICLR.
[8] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[9] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[10] Michal Valko, et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, 2020, NeurIPS.
[11] Kaiming He, et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Carla P. Gomes, et al. Understanding Batch Normalization, 2018, NeurIPS.
[13] Jiri Matas, et al. All You Need Is a Good Init, 2015, ICLR.
[14] Kaiming He, et al. Group Normalization, 2018, ECCV.
[15] Alexander A. Alemi, et al. On Variational Bounds of Mutual Information, 2019, ICML.
[16] R Devon Hjelm, et al. Data-Efficient Reinforcement Learning with Momentum Predictive Representations, 2020, arXiv.
[17] Ronen Basri, et al. SpectralNet: Spectral Clustering Using Deep Neural Networks, 2018, ICLR.
[18] Jascha Sohl-Dickstein, et al. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks, 2018, ICML.
[19] Julien Mairal, et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020, NeurIPS.
[20] Oriol Vinyals, et al. Representation Learning with Contrastive Predictive Coding, 2018, arXiv.
[21] Trevor Darrell, et al. Data-dependent Initializations of Convolutional Neural Networks, 2015, ICLR.
[22] Geoffrey E. Hinton, et al. Layer Normalization, 2016, arXiv.
[23] Yang You, et al. Scaling SGD Batch Size to 32K for ImageNet Training, 2017, arXiv.
[24] Matthijs Douze, et al. Deep Clustering for Unsupervised Learning of Visual Features, 2018, ECCV.
[25] Chenxi Liu, et al. Micro-Batch Training with Batch-Channel Normalization and Weight Standardization, 2019.
[26] Andrea Vedaldi, et al. Instance Normalization: The Missing Ingredient for Fast Stylization, 2016, arXiv.
[27] Geoffrey E. Hinton, et al. Big Self-Supervised Models are Strong Semi-Supervised Learners, 2020, NeurIPS.
[28] Kaiming He, et al. Improved Baselines with Momentum Contrastive Learning, 2020, arXiv.
[29] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[30] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, arXiv.