Philip Bachman | Geoffrey J. Gordon | Hannes Schulz | Alessandro Sordoni | Rémi Tachet des Combes | Nouha Dziri
[1] Alexander A. Alemi,et al. On Variational Bounds of Mutual Information , 2019, ICML.
[2] Jonathan Krause,et al. Collecting a Large-scale Dataset of Fine-grained Cars , 2013 .
[3] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[4] Jeffrey Dean,et al. Efficient Estimation of Word Representations in Vector Space , 2013, ICLR.
[5] Allan Jabri,et al. Space-Time Correspondence as a Contrastive Random Walk , 2020, NeurIPS.
[6] Prafulla Dhariwal,et al. Glow: Generative Flow with Invertible 1x1 Convolutions , 2018, NeurIPS.
[7] Nikos Komodakis,et al. Unsupervised Representation Learning by Predicting Image Rotations , 2018, ICLR.
[8] Oriol Vinyals,et al. Representation Learning with Contrastive Predictive Coding , 2018, ArXiv.
[9] Yoshua Bengio,et al. Learning deep representations by mutual information estimation and maximization , 2018, ICLR.
[10] Karl Stratos,et al. Formal Limitations on the Measurement of Mutual Information , 2018, AISTATS.
[11] Michael Tschannen,et al. On Mutual Information Maximization for Representation Learning , 2019, ICLR.
[12] Geoffrey E. Hinton,et al. A Simple Framework for Contrastive Learning of Visual Representations , 2020, ICML.
[13] David McAllester. Information Theoretic Co-Training , 2018, ArXiv.
[14] Li Fei-Fei,et al. ImageNet: A large-scale hierarchical image database , 2009, CVPR.
[15] Phillip Isola,et al. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere , 2020, ICML.
[16] Sebastian Nowozin,et al. Which Training Methods for GANs do actually Converge? , 2018, ICML.
[17] Yejin Choi,et al. The Curious Case of Neural Text Degeneration , 2019, ICLR.
[18] Thomas Wolf,et al. TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents , 2019, ArXiv.
[19] Yee Whye Teh,et al. Tighter Variational Bounds are Not Necessarily Better , 2018, ICML.
[20] Stefano Ermon,et al. Understanding the Limitations of Variational Mutual Information Estimators , 2020, ICLR.
[21] R Devon Hjelm,et al. Learning Representations by Maximizing Mutual Information Across Views , 2019, NeurIPS.
[22] Zhuang Ma,et al. Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency , 2018, EMNLP.
[23] Jianfeng Gao,et al. DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation , 2020, ACL.
[24] David Barber,et al. The IM algorithm: a variational approach to Information Maximization , 2003, NIPS.
[25] Stefano Ermon,et al. Multi-label Contrastive Predictive Coding , 2020, NeurIPS.
[26] Kaiming He,et al. Improved Baselines with Momentum Contrastive Learning , 2020, ArXiv.
[27] Jason Weston,et al. Neural Text Generation with Unlikelihood Training , 2019, ICLR.
[28] Salim Roukos,et al. Bleu: a Method for Automatic Evaluation of Machine Translation , 2002, ACL.
[29] Aapo Hyvärinen,et al. Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics , 2012, J. Mach. Learn. Res..
[30] Pietro Liò,et al. Deep Graph Infomax , 2018, ICLR.
[31] Andrew Zisserman,et al. Automated Flower Classification over a Large Number of Classes , 2008, ICVGIP.
[32] Julien Mairal,et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments , 2020, NeurIPS.
[33] Jianfeng Gao,et al. A Diversity-Promoting Objective Function for Neural Conversation Models , 2015, NAACL.
[34] Cordelia Schmid,et al. What Makes for Good Views for Contrastive Learning? , 2020, NeurIPS.
[35] Michal Valko,et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning , 2020, NeurIPS.
[36] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[37] David Duvenaud,et al. Reinterpreting Importance-Weighted Autoencoders , 2017, ICLR.
[38] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[39] Ø. Skare,et al. Improved Sampling-Importance Resampling and Reduced Bias Importance Sampling , 2003 .
[40] Michael U. Gutmann,et al. Telescoping Density-Ratio Estimation , 2020, NeurIPS.
[41] Jason Weston,et al. Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training , 2020, ACL.
[42] Rajesh P. N. Rao,et al. Predictive Coding , 2019, A Blueprint for the Hard Problem of Consciousness.
[43] R Devon Hjelm,et al. Deep Reinforcement and InfoMax Learning , 2020, NeurIPS.
[44] Himanshu Asnani,et al. C-MI-GAN : Estimation of Conditional Mutual Information using MinMax formulation , 2020, UAI.
[45] Yoshua Bengio,et al. Mutual Information Neural Estimation , 2018, ICML.
[46] Paolo Favaro,et al. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles , 2016, ECCV.
[47] Dong Wang,et al. Learning to Navigate for Fine-grained Classification , 2018, ECCV.
[48] Yee Whye Teh,et al. A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments , 2020, AISTATS.
[49] Zhe Gan,et al. Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization , 2018, NeurIPS.
[50] Phillip Isola,et al. Contrastive Multiview Coding , 2019, ECCV.
[51] Geoffrey E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines , 2012, Neural Networks: Tricks of the Trade.
[52] Karl Stratos,et al. Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction , 2018, NAACL.
[53] Jason Weston,et al. Wizard of Wikipedia: Knowledge-Powered Conversational agents , 2018, ICLR.
[54] D. Rubin,et al. The calculation of posterior distributions by data augmentation , 1987 .
[55] David J. Fleet,et al. VSE++: Improving Visual-Semantic Embeddings with Hard Negatives , 2017, BMVC.
[56] Jacob Devlin,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[57] Michael U. Gutmann,et al. Conditional Noise-Contrastive Estimation of Unnormalised Models , 2018, ICML.