Demystifying Inter-Class Disentanglement