[1] Tommi S. Jaakkola,et al. Sequence to Better Sequence: Continuous Revision of Combinatorial Structures , 2017, ICML.
[2] Geoffrey E. Hinton,et al. Reducing the Dimensionality of Data with Neural Networks , 2006, Science.
[3] Yarin Gal,et al. Uncertainty in Deep Learning , 2016 .
[4] Geoffrey E. Hinton,et al. Autoencoders, Minimum Description Length and Helmholtz Free Energy , 1993, NIPS.
[5] Chuang Gan,et al. The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision , 2019, ICLR.
[6] Yee Whye Teh,et al. Disentangling Disentanglement in Variational Autoencoders , 2018, ICML.
[7] Sébastien Marcel,et al. DeepFakes: a New Threat to Face Recognition? Assessment and Detection , 2018, ArXiv.
[8] Stefano Ermon,et al. Learning Hierarchical Features from Deep Generative Models , 2017, ICML.
[9] Kevin Murphy,et al. Generative Models of Visually Grounded Imagination , 2017, ICLR.
[10] Joshua B. Tenenbaum,et al. Mapping a Manifold of Perceptual Observations , 1997, NIPS.
[11] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[12] Iranga Samindani Weerakkody. A Study of the Traditional Pasan Singing Style Associated with the Lenten Season (Unpublished doctoral dissertation) , 2017 .
[13] Joshua B. Tenenbaum,et al. Human-level concept learning through probabilistic program induction , 2015, Science.
[14] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[15] Max Welling,et al. Semi-supervised Learning with Deep Generative Models , 2014, NIPS.
[16] Philip H. S. Torr,et al. Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models , 2019, NeurIPS.
[17] Pascal Vincent,et al. Representation Learning: A Review and New Perspectives , 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[18] Mike Wu,et al. Multimodal Generative Models for Scalable Weakly-Supervised Learning , 2018, NeurIPS.
[19] Bernhard Schölkopf,et al. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations , 2018, ICML.
[20] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[21] Frank D. Wood,et al. Learning Disentangled Representations with Semi-Supervised Deep Generative Models , 2017, NIPS.
[22] Masahiro Suzuki,et al. Joint Multimodal Learning with Deep Generative Models , 2016, ICLR.
[23] N. Foo. Conceptual Spaces: The Geometry of Thought , 2022 .
[24] R. A. Brooks,et al. Intelligence without Representation , 1991, Artif. Intell..
[25] Michael I. Jordan,et al. Graphical Models, Exponential Families, and Variational Inference , 2008, Found. Trends Mach. Learn..
[26] Andrea Roli,et al. Brooks: Intelligence without Representation , 2015 .
[27] Yarin Gal,et al. Understanding Measures of Uncertainty for Adversarial Example Detection , 2018, UAI.
[28] Dustin Tran,et al. Hierarchical Variational Models , 2015, ICML.
[29] Alfredo Pereira. Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought , 2007, Minds and Machines.
[30] Emilien Dupont,et al. Joint-VAE: Learning Disentangled Joint Continuous and Discrete Representations , 2018, NeurIPS.
[31] Christopher Burgess,et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework , 2016, ICLR.
[32] Ole Winther,et al. Auxiliary Deep Generative Models , 2016, ICML.
[33] Joshua B. Tenenbaum,et al. Separating Style and Content with Bilinear Models , 2000, Neural Computation.