A Theoretical Analysis of Contrastive Unsupervised Representation Learning
Sanjeev Arora | Hrishikesh Khandeparkar | Mikhail Khodak | Orestis Plevrakis | Nikunj Saunshi
[1] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[2] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[3] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[4] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[5] Avrim Blum, et al. The Bottleneck, 2021, Monopsony Capitalism.
[6] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[7] Sanjeev Arora, et al. Provable Benefits of Representation Learning, 2017, ArXiv.
[8] Marc Sebban, et al. A Survey on Metric Learning for Feature Vectors and Structured Data, 2013, ArXiv.
[9] Richard S. Zemel, et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[10] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[11] Zhuang Ma, et al. Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency, 2018, EMNLP.
[12] Massimiliano Pontil, et al. The Benefit of Multitask Representation Learning, 2015, J. Mach. Learn. Res.
[13] Marc Sebban, et al. Similarity Learning for Provably Accurate Sparse Linear Classification, 2012, ICML.
[14] Yoshua Bengio, et al. Gated Feedback Recurrent Neural Networks, 2015, ICML.
[15] Stefano Ermon, et al. Tile2Vec: Unsupervised Representation Learning for Spatially Distributed Data, 2018, AAAI.
[16] Kihyuk Sohn, et al. Improved Deep Metric Learning with Multi-class N-pair Loss Objective, 2016, NIPS.
[17] Mehryar Mohri, et al. Two-Stage Learning Kernel Algorithms, 2010, ICML.
[18] Chris Dyer, et al. Notes on Noise Contrastive Estimation and Negative Sampling, 2014, ArXiv.
[19] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[20] Honglak Lee, et al. An Efficient Framework for Learning Sentence Representations, 2018, ICLR.
[21] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[22] Tengyu Ma, et al. A Non-generative Framework and Convex Relaxations for Unsupervised Learning, 2016, NIPS.
[23] Nitish Srivastava. Unsupervised Learning of Visual Representations Using Videos, 2015.
[24] Matteo Pagliardini, et al. Unsupervised Learning of Sentence Embeddings Using Compositional n-Gram Features, 2017, NAACL.
[25] Andreas Maurer, et al. A Vector-Contraction Inequality for Rademacher Complexities, 2016, ALT.
[26] Ameet Talwalkar, et al. Foundations of Machine Learning, 2012, Adaptive Computation and Machine Learning.
[27] Koray Kavukcuoglu, et al. A Binary Classification Framework for Two-Stage Multiple Kernel Learning, 2012, ICML.
[28] Aapo Hyvärinen, et al. Noise-Contrastive Estimation: A New Estimation Principle for Unnormalized Statistical Models, 2010, AISTATS.