Xiao-Tong Yuan | Caiming Xiong | Steven Hoi | Pan Zhou
[1] E. Xing et al. Iterative Graph Self-Distillation, 2020, IEEE Transactions on Knowledge and Data Engineering.
[2] Xinlei Chen et al. Exploring Simple Siamese Representation Learning, 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Hongyi Zhang et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[4] Nikos Komodakis et al. Unsupervised Representation Learning by Predicting Image Rotations, 2018, ICLR.
[5] Pietro Perona et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[6] Samet Oymak et al. Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks, 2019, AISTATS.
[7] Oriol Vinyals et al. Representation Learning with Contrastive Predictive Coding, 2018, arXiv.
[8] Samet Oymak et al. Toward Moderate Overparameterization: Global Convergence Guarantees for Training Shallow Neural Networks, 2019, IEEE Journal on Selected Areas in Information Theory.
[9] Liwei Wang et al. The Expressive Power of Neural Networks: A View from the Width, 2017, NIPS.
[10] Laurens van der Maaten et al. Self-Supervised Learning of Pretext-Invariant Representations, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Ali Razavi et al. Data-Efficient Image Recognition with Contrastive Predictive Coding, 2019, ICML.
[12] Jiashi Feng et al. Empirical Risk Landscape Analysis for Understanding Deep Neural Networks, 2018, ICLR.
[13] Julien Mairal et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020, NeurIPS.
[14] Geoffrey E. Hinton et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[15] Caiming Xiong et al. Task Similarity Aware Meta Learning: Theory-Inspired Improvement on MAML, 2021, UAI.
[16] Tony X. Han et al. Learning Efficient Object Detection Models with Knowledge Distillation, 2017, NIPS.
[17] Mikhail Khodak et al. A Theoretical Analysis of Contrastive Unsupervised Representation Learning, 2019, ICML.
[18] Massimiliano Pontil et al. Empirical Bernstein Bounds and Sample-Variance Penalization, 2009, COLT.
[19] R Devon Hjelm et al. Learning Representations by Maximizing Mutual Information Across Views, 2019, NeurIPS.
[20] Yann Dauphin et al. Empirical Analysis of the Hessian of Over-Parametrized Neural Networks, 2017, ICLR.
[21] Ali Farhadi et al. Label Refinery: Improving ImageNet Classification through Label Progression, 2018, arXiv.
[22] Pan Zhou et al. How Important is the Train-Validation Split in Meta-Learning?, 2020, ICML.
[23] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[24] Kaiming He et al. Improved Baselines with Momentum Contrastive Learning, 2020, arXiv.
[25] Pan Zhou et al. Theory-Inspired Path-Regularized Differential Network Architecture Search, 2020, NeurIPS.
[26] Cordelia Schmid et al. What Makes for Good Views for Contrastive Learning?, 2020, NeurIPS.
[27] Alexei A. Efros et al. Colorful Image Colorization, 2016, ECCV.
[28] Matus Telgarsky et al. Benefits of Depth in Neural Networks, 2016, COLT.
[29] Michal Valko et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, 2020, NeurIPS.
[30] Wei Shen et al. CO2: Consistent Contrast for Unsupervised Visual Representation Learning, 2020, ICLR.
[31] Pan Zhou et al. Towards Theoretically Understanding Why SGD Generalizes Better Than ADAM in Deep Learning, 2020, NeurIPS.
[32] Dumitru Erhan et al. Training Deep Neural Networks on Noisy Labels with Bootstrapping, 2014, ICLR.
[33] Yoshua Bengio et al. Learning Deep Representations by Mutual Information Estimation and Maximization, 2018, ICLR.
[34] Nadav Cohen et al. On the Expressive Power of Deep Learning: A Tensor Analysis, 2016, COLT.
[35] Shuicheng Yan et al. Efficient Meta Learning via Minibatch Proximal Update, 2019, NeurIPS.
[36] Yuanzhi Li et al. A Convergence Theory for Deep Learning via Over-Parameterization, 2018, ICML.
[37] Liwei Wang et al. Gradient Descent Finds Global Minima of Deep Neural Networks, 2018, ICML.
[38] Gihun Lee et al. MixCo: Mix-up Contrastive Learning for Visual Representation, 2020, arXiv.
[39] Alex Krizhevsky et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[40] Andrea Vedaldi et al. Self-labelling via Simultaneous Clustering and Representation Learning, 2020, ICLR.
[41] Jürgen Schmidhuber et al. Flat Minima, 1997, Neural Computation.
[42] Ching-Yao Chuang et al. Debiased Contrastive Learning, 2020, NeurIPS.
[43] Samet Oymak et al. Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?, 2018, ICML.
[44] Junnan Li et al. Prototypical Contrastive Learning of Unsupervised Representations, 2020, ICLR.
[45] Frank Hutter et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[46] Eric Xing et al. Prototypical Graph Contrastive Learning, 2021, arXiv.
[47] Luc Van Gool et al. The Pascal Visual Object Classes (VOC) Challenge, 2010, International Journal of Computer Vision.
[48] Quoc V. Le et al. Towards Domain-Agnostic Contrastive Learning, 2020, ICML.
[49] Yann LeCun et al. Dimensionality Reduction by Learning an Invariant Mapping, 2006, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06).
[50] Matthieu Cord et al. Learning Representations by Predicting Bags of Visual Words, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[51] Kaiming He et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[52] Shuicheng Yan et al. Towards Understanding Why Lookahead Generalizes Better Than SGD and Beyond, 2021, NeurIPS.
[53] Andrew Zisserman et al. Multi-task Self-Supervised Visual Learning, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[54] Barnabás Póczos et al. Gradient Descent Provably Optimizes Over-parameterized Neural Networks, 2018, ICLR.
[55] Jiashi Feng et al. Understanding Generalization and Optimization Performance of Deep CNNs, 2018, ICML.
[56] Yuanzhi Li et al. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data, 2018, NeurIPS.
[57] Paolo Favaro et al. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles, 2016, ECCV.
[58] Ohad Shamir et al. The Power of Depth for Feedforward Neural Networks, 2015, COLT.
[57] Paolo Favaro,et al. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles , 2016, ECCV.
[58] Ohad Shamir,et al. The Power of Depth for Feedforward Neural Networks , 2015, COLT.
[59] Fei Wang et al. Deep Learning for Healthcare: Review, Opportunities and Challenges, 2018, Briefings in Bioinformatics.
[60] Geoffrey E. Hinton et al. When Does Label Smoothing Help?, 2019, NeurIPS.
[61] Le Song et al. Diverse Neural Network Learns True Target Functions, 2016, AISTATS.
[62] Jeff Donahue et al. Large Scale Adversarial Representation Learning, 2019, NeurIPS.
[63] R. Srikant et al. Why Deep Neural Networks for Function Approximation?, 2016, ICLR.
[64] Chengxu Zhuang et al. Local Aggregation for Unsupervised Learning of Visual Embeddings, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[65] Kibok Lee et al. i-Mix: A Strategy for Regularizing Contrastive Representation Learning, 2020, arXiv.
[66] Geoffrey E. Hinton et al. Big Self-Supervised Models are Strong Semi-Supervised Learners, 2020, NeurIPS.
[67] Phillip Isola et al. Contrastive Multiview Coding, 2019, ECCV.
[68] Geoffrey E. Hinton et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[69] David Berthelot et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[70] Guo-Jun Qi et al. Contrastive Learning With Stronger Augmentations, 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[71] Li Fei-Fei et al. ImageNet: A Large-Scale Hierarchical Image Database, 2009, CVPR.
[72] Stella X. Yu et al. Unsupervised Feature Learning via Non-parametric Instance Discrimination, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[73] Jian Sun et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).