暂无分享,去创建一个
[1] Abhishek Kumar,et al. Score-Based Generative Modeling through Stochastic Differential Equations , 2020, ICLR.
[2] Han Zhang,et al. Self-Attention Generative Adversarial Networks , 2018, ICML.
[3] Paolo Favaro,et al. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles , 2016, ECCV.
[4] Jiaming Song,et al. Denoising Diffusion Implicit Models , 2021, ICLR.
[5] Julien Mairal,et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments , 2020, NeurIPS.
[6] Rob Fergus,et al. Visualizing and Understanding Convolutional Networks , 2013, ECCV.
[7] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[8] Martin Wattenberg,et al. SmoothGrad: removing noise by adding noise , 2017, ArXiv.
[9] Timothy M. Hospedales,et al. How Well Do Self-Supervised Models Transfer? , 2020, ArXiv.
[10] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[11] Razvan Pascanu,et al. On the Number of Linear Regions of Deep Neural Networks , 2014, NIPS.
[12] Pascal Vincent,et al. Representation Learning: A Review and New Perspectives , 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] William T. Freeman,et al. Semantic Pyramid for Image Generation , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Xinlei Chen,et al. Exploring Simple Siamese Representation Learning , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Yongxin Yang,et al. Deeper, Broader and Artier Domain Generalization , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[16] Nikos Komodakis,et al. Unsupervised Representation Learning by Predicting Image Rotations , 2018, ICLR.
[17] Geoffrey E. Hinton,et al. A Learning Algorithm for Boltzmann Machines , 1985, Cogn. Sci..
[18] Kaiming He,et al. Momentum Contrast for Unsupervised Visual Representation Learning , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Oriol Vinyals,et al. Representation Learning with Contrastive Predictive Coding , 2018, ArXiv.
[20] Pieter Abbeel,et al. Denoising Diffusion Probabilistic Models , 2020, NeurIPS.
[21] Sebastian Ramos,et al. The Cityscapes Dataset for Semantic Urban Scene Understanding , 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Laurens van der Maaten,et al. Self-Supervised Learning of Pretext-Invariant Representations , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Christopher K. I. Williams,et al. Inverting Supervised Representations with Autoregressive Neural Density Models , 2018, AISTATS.
[24] Michal Valko,et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning , 2020, NeurIPS.
[25] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[26] Bjorn Ommer,et al. A Disentangling Invertible Interpretation Network for Explaining Latent Representations , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Thomas Brox,et al. U-Net: Convolutional Networks for Biomedical Image Segmentation , 2015, MICCAI.
[28] Prafulla Dhariwal,et al. Diffusion Models Beat GANs on Image Synthesis , 2021, NeurIPS.
[29] Geoffrey E. Hinton,et al. A Simple Framework for Contrastive Learning of Visual Representations , 2020, ICML.
[30] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[31] Yee Whye Teh,et al. A Fast Learning Algorithm for Deep Belief Nets , 2006, Neural Computation.
[32] Jean Ponce,et al. VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning , 2021, ArXiv.
[33] Surya Ganguli,et al. Deep Unsupervised Learning using Nonequilibrium Thermodynamics , 2015, ICML.
[34] Jonathon Shlens,et al. A Learned Representation For Artistic Style , 2016, ICLR.
[35] Julien Mairal,et al. Emerging Properties in Self-Supervised Vision Transformers , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[36] Prafulla Dhariwal,et al. Improved Denoising Diffusion Probabilistic Models , 2021, ICML.
[37] Pascal Vincent,et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion , 2010, J. Mach. Learn. Res..
[38] Yoshua Bengio,et al. Extracting and composing robust features with denoising autoencoders , 2008, ICML '08.
[39] Bjorn Ommer,et al. Making Sense of CNNs: Interpreting Deep Representations & Their Invariances with INNs , 2020, ECCV.
[40] Yusheng Xie,et al. Towards Good Practices in Self-supervised Representation Learning , 2020, ArXiv.
[41] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[42] Jeff Donahue,et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis , 2018, ICLR.
[43] Stefano Ermon,et al. Improved Techniques for Training Score-Based Generative Models , 2020, NeurIPS.
[44] Yang Song,et al. Generative Modeling by Estimating Gradients of the Data Distribution , 2019, NeurIPS.
[45] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[46] Alex Graves,et al. Conditional Image Generation with PixelCNN Decoders , 2016, NIPS.
[47] Pascal Vincent,et al. Visualizing Higher-Layer Features of a Deep Network , 2009 .
[48] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[49] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[50] Behnaam Aazhang,et al. The Geometry of Deep Networks: Power Diagram Subdivision , 2019, NeurIPS.
[51] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[52] G. Petrova,et al. Nonlinear Approximation and (Deep) ReLU\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\mathrm {ReLU}$$\end{document} , 2019, Constructive Approximation.
[53] Yann LeCun,et al. Barlow Twins: Self-Supervised Learning via Redundancy Reduction , 2021, ICML.
[54] Simon Osindero,et al. Conditional Generative Adversarial Nets , 2014, ArXiv.
[55] Leon A. Gatys,et al. Diverse feature visualizations reveal invariances in early layers of deep neural networks , 2018, ECCV.
[56] Max Welling,et al. Improved Variational Inference with Inverse Autoregressive Flow , 2016, NIPS 2016.
[57] Richard G. Baraniuk,et al. A Spline Theory of Deep Learning , 2018, ICML 2018.
[58] Geoffrey E. Hinton,et al. Restricted Boltzmann machines for collaborative filtering , 2007, ICML '07.
[59] Abhishek Das,et al. Grad-CAM: Why did you say that? , 2016, ArXiv.
[60] Jaakko Lehtinen,et al. Analyzing and Improving the Image Quality of StyleGAN , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[61] Koray Kavukcuoglu,et al. Pixel Recurrent Neural Networks , 2016, ICML.
[62] Wojciech Zaremba,et al. Improved Techniques for Training GANs , 2016, NIPS.
[63] Stephen Lin,et al. What makes instance discrimination good for transfer learning? , 2020, ICLR.
[64] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[65] Jiebo Luo,et al. AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations Rather Than Data , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).