Transporting Labels via Hierarchical Optimal Transport for Semi-Supervised Learning
Nasser M. Nasrabadi | Sobhan Soleymani | Fariborz Taherkhani | Ali Dabouei | Jeremy M. Dawson
[1] Jia Li,et al. Aggregated Wasserstein Distance and State Registration for Hidden Markov Models , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[2] Eva L. Dyer,et al. Hierarchical Optimal Transport for Multimodal Distribution Alignment , 2019, NeurIPS.
[3] Justin Solomon,et al. Hierarchical Optimal Transport for Document Representation , 2019, NeurIPS.
[4] Yannis Avrithis,et al. Label Propagation for Deep Semi-Supervised Learning , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Chen-Yu Lee,et al. Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Joost van de Weijer,et al. Exploiting Unlabeled Data in CNNs by Self-Supervised Learning to Rank , 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[7] Gabriel Peyré,et al. Sample Complexity of Sinkhorn Divergences , 2018, AISTATS.
[8] Zhanxing Zhu,et al. Tangent-Normal Adversarial Regularization for Semi-Supervised Learning , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Andrew Gordon Wilson,et al. There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average , 2018, ICLR.
[10] Shin Ishii,et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[11] Nanning Zheng,et al. Transductive Semi-Supervised Deep Learning Using Min-Max Features , 2018, ECCV.
[12] Chao Yang,et al. A Survey on Deep Transfer Learning , 2018, ICANN.
[13] Wen Li,et al. Semi-Supervised Optimal Transport for Heterogeneous Domain Adaptation , 2018, IJCAI.
[14] Zhi-Hua Zhou,et al. Tri-net for Semi-Supervised Deep Learning , 2018, IJCAI.
[15] Yalin Wang,et al. Variational Wasserstein Clustering , 2018, ECCV.
[16] Nicolas Courty,et al. DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation , 2018, ECCV.
[17] Colin Raffel,et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms , 2018, NeurIPS.
[18] Sam Kwong,et al. Semi-Supervised Spectral Clustering With Structured Sparsity Regularization , 2018, IEEE Signal Processing Letters.
[19] Tommi S. Jaakkola,et al. Structured Optimal Transport , 2018, AISTATS.
[20] Bo Zhang,et al. Smooth Neighbors on Teacher Graphs for Semi-Supervised Learning , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[21] Jian Shen,et al. Wasserstein Distance Guided Representation Learning for Domain Adaptation , 2017, AAAI.
[22] Shun-ichi Amari,et al. Information geometry connecting Wasserstein distance and Kullback–Leibler divergence via the entropy-relaxed transportation problem , 2017, Information Geometry.
[23] Gustavo K. Rohde,et al. Optimal Mass Transport: Signal processing and machine-learning applications , 2017, IEEE Signal Processing Magazine.
[24] Dinh Q. Phung,et al. Multilevel Clustering via Wasserstein Means , 2017, ICML.
[25] Harri Valpola,et al. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results , 2017, NIPS.
[26] Léon Bottou,et al. Wasserstein GAN , 2017, ArXiv.
[27] Geoffrey E. Hinton,et al. Regularizing Neural Networks by Penalizing Confident Output Distributions , 2017, ICLR.
[28] Timo Aila,et al. Temporal Ensembling for Semi-Supervised Learning , 2016, ICLR.
[29] James Zijun Wang,et al. Fast Discrete Distribution Clustering Using Wasserstein Barycenter With Sparse Support , 2015, IEEE Transactions on Signal Processing.
[30] Christine Guillemot,et al. A study of the classification of low-dimensional data with supervised manifold learning , 2015, J. Mach. Learn. Res..
[31] Nicolas Courty,et al. Optimal Transport for Domain Adaptation , 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[32] Yu Qiao,et al. A Discriminative Feature Learning Approach for Deep Face Recognition , 2016, ECCV.
[33] Tolga Tasdizen,et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning , 2016, NIPS.
[34] Oriol Vinyals,et al. Matching Networks for One Shot Learning , 2016, NIPS.
[35] Shun-ichi Amari,et al. Information Geometry and Its Applications , 2016 .
[36] Yang Zou,et al. Sliced Wasserstein Kernels for Probability Distributions , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[37] Steffen Borgwardt,et al. Discrete Wasserstein barycenters: optimal transport for discrete data , 2015, Mathematical Methods of Operations Research.
[38] X. Nguyen. Borrowing strength in hierarchical Bayes: Posterior concentration of the Dirichlet base measure , 2016 .
[39] J. A. Cuesta-Albertos,et al. A fixed-point approach to barycenters in Wasserstein space , 2015, arXiv:1511.05355.
[40] F. Santambrogio. Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling , 2015 .
[41] Gabriel Peyré,et al. Convolutional Wasserstein distances , 2015, ACM Trans. Graph..
[42] Tapani Raiko,et al. Semi-supervised Learning with Ladder Networks , 2015, NIPS.
[43] Hossein Mobahi,et al. Learning with a Wasserstein Loss , 2015, NIPS.
[44] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[45] Filippo Santambrogio,et al. Optimal Transport for Applied Mathematicians , 2015 .
[46] Philip Bachman,et al. Learning with Pseudo-Ensembles , 2014, NIPS.
[47] Arnaud Doucet,et al. Fast Computation of Wasserstein Barycenters , 2013, ICML.
[48] Marco Cuturi,et al. Sinkhorn Distances: Lightspeed Computation of Optimal Transport , 2013, NIPS.
[49] Christoph Schnörr,et al. A Hierarchical Approach to Optimal Transport , 2013, SSVM.
[50] XuanLong Nguyen. Borrowing strength in hierarchical Bayes: convergence of the Dirichlet base measure , 2013, ArXiv.
[51] Dong-Hyun Lee,et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks , 2013 .
[52] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[53] Guillaume Carlier,et al. Barycenters in the Wasserstein Space , 2011, SIAM J. Math. Anal..
[54] Andrew Y. Ng,et al. Reading Digits in Natural Images with Unsupervised Feature Learning , 2011 .
[55] Fei-Fei Li,et al. ImageNet: A large-scale hierarchical image database , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[56] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[57] Philippe Thomas,et al. Semi-Supervised Learning by Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien (Review) , 2009 .
[58] C. Villani. Optimal Transport: Old and New , 2008 .
[59] Mikhail Belkin,et al. Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples , 2006, J. Mach. Learn. Res..
[60] Bernhard Schölkopf,et al. Learning with Local and Global Consistency , 2003, NIPS.
[61] Bernhard Schölkopf,et al. Cluster Kernels for Semi-Supervised Learning , 2002, NIPS.
[62] John N. Tsitsiklis,et al. Introduction to linear optimization , 1997, Athena scientific optimization and computation series.
[63] David Pollard,et al. Quantization and the method of k-means , 1982, IEEE Trans. Inf. Theory.