Adam M. Oberman | Vikram Voleti | Chris Finlay | Christopher Pal
[1] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[2] Thomas G. Dietterich,et al. Deep Anomaly Detection with Outlier Exposure , 2018, ICLR.
[3] Elyas Sabeti,et al. Data Discovery and Anomaly Detection Using Atypicality: Theory , 2017, IEEE Transactions on Information Theory.
[4] Surya Ganguli,et al. Deep Unsupervised Learning using Nonequilibrium Thermodynamics , 2015, ICML.
[5] E. Tabak,et al. A Family of Nonparametric Density Estimation Algorithms , 2013 .
[6] Jascha Sohl-Dickstein,et al. Invertible Convolutional Flow , 2019, NeurIPS.
[7] Prafulla Dhariwal,et al. Glow: Generative Flow with Invertible 1x1 Convolutions , 2018, NeurIPS.
[8] Alex Graves,et al. Conditional Image Generation with PixelCNN Decoders , 2016, NIPS.
[9] Li Fei-Fei,et al. ImageNet: A large-scale hierarchical image database , 2009, CVPR.
[10] Stéphane Mallat,et al. A Wavelet Tour of Signal Processing - The Sparse Way, 3rd Edition , 2008 .
[11] Ashish Khetan,et al. PacGAN: The Power of Two Samples in Generative Adversarial Networks , 2017, IEEE Journal on Selected Areas in Information Theory.
[12] Philip H. S. Torr,et al. STEER : Simple Temporal Regularization For Neural ODEs , 2020, NeurIPS.
[13] Kevin Gimpel,et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks , 2016, ICLR.
[14] Tali Dekel,et al. SinGAN: Learning a Generative Model From a Single Natural Image , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[15] Jakub M. Tomczak,et al. The Convolution Exponential and Generalized Sylvester Flows , 2020, NeurIPS.
[16] Pieter Abbeel,et al. PixelSNAIL: An Improved Autoregressive Generative Model , 2017, ICML.
[17] Ali Razavi,et al. Generating Diverse High-Fidelity Images with VQ-VAE-2 , 2019, NeurIPS.
[18] Sergio Gomez Colmenarejo,et al. Parallel Multiscale Autoregressive Density Estimation , 2017, ICML.
[19] Pascal Vincent,et al. A Closer Look at the Optimization Landscapes of Generative Adversarial Networks , 2019, ICLR.
[20] Ilya Sutskever,et al. Generating Long Sequences with Sparse Transformers , 2019, ArXiv.
[21] Guigang Zhang,et al. Deep Learning , 2016, Int. J. Semantic Comput..
[22] Jaakko Lehtinen,et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation , 2017, ICLR.
[23] Soumith Chintala,et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks , 2015, ICLR.
[25] R. Srikant,et al. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks , 2017, ICLR.
[27] Stefano Ermon,et al. Improved Techniques for Training Score-Based Generative Models , 2020, NeurIPS.
[28] Trevor Darrell,et al. Fully Convolutional Networks for Semantic Segmentation , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[29] Yang Song,et al. Generative Modeling by Estimating Gradients of the Data Distribution , 2019, NeurIPS.
[30] Eric Nalisnick,et al. Normalizing Flows for Probabilistic Modeling and Inference , 2019, J. Mach. Learn. Res..
[31] Jan Kautz,et al. NVAE: A Deep Hierarchical Variational Autoencoder , 2020, NeurIPS.
[32] David Duvenaud,et al. Neural Ordinary Differential Equations , 2018, NeurIPS.
[33] Pieter Abbeel,et al. Denoising Diffusion Probabilistic Models , 2020, NeurIPS.
[34] Matthew J. Johnson,et al. Learning Differential Equations that are Easy to Solve , 2020, NeurIPS.
[35] Ivan Grubisic,et al. Densely connected normalizing flows , 2021, NeurIPS.
[36] Yee Whye Teh,et al. Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality , 2019, ArXiv.
[37] Matthias Bethge,et al. A note on the evaluation of generative models , 2015, ICLR.
[39] Koray Kavukcuoglu,et al. Pixel Recurrent Neural Networks , 2016, ICML.
[40] Rob Fergus,et al. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks , 2015, NIPS.
[41] Edward H. Adelson,et al. Pyramid Methods in Image Processing , 1984 .
[42] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[43] Yang Song,et al. MintNet: Building Invertible Neural Networks with Masked Convolutions , 2019, NeurIPS.
[44] Iain Murray,et al. Neural Spline Flows , 2019, NeurIPS.
[45] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information , 1982 .
[46] Adam M. Oberman,et al. How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization , 2020, ICML.
[47] Mi-Yen Yeh,et al. Accelerating Continuous Normalizing Flow with Trajectory Polynomial Regularization , 2020, AAAI.
[48] P. Burt. Fast filter transform for image processing , 1981 .
[49] Oliver Wang,et al. MSG-GAN: Multi-Scale Gradients for Generative Adversarial Networks , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[51] David Duvenaud,et al. FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models , 2018, ICLR.
[52] Jeff Donahue,et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis , 2018, ICLR.
[53] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[54] Stefano Ermon,et al. Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models , 2017, AAAI.
[55] Kibok Lee,et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks , 2018, NeurIPS.
[56] Jordi Luque,et al. Input complexity and out-of-distribution detection with likelihood-based generative models , 2020, ICLR.
[57] Razvan Pascanu,et al. On the difficulty of training recurrent neural networks , 2012, ICML.
[58] Xingjian Li,et al. OT-Flow: Fast and Accurate Continuous Normalizing Flows via Optimal Transport , 2020, ArXiv.
[59] Mark Chen,et al. Distribution Augmentation for Generative Modeling , 2020, ICML.
[60] Jon Sneyers,et al. FLIF: Free lossless image format based on MANIAC compression , 2016, 2016 IEEE International Conference on Image Processing (ICIP).
[61] Jun Zhu,et al. VFlow: More Expressive Generative Flows with Variational Data Augmentation , 2020, ICML.
[62] Anders Høst-Madsen,et al. Data Discovery and Anomaly Detection Using Atypicality for Real-Valued Data , 2019, Entropy.
[64] Sergey Levine,et al. Stochastic Adversarial Video Prediction , 2018, ArXiv.
[65] Shakir Mohamed,et al. Variational Inference with Normalizing Flows , 2015, ICML.
[66] Samy Bengio,et al. Density estimation using Real NVP , 2016, ICLR.
[67] Abhishek Kumar,et al. Score-Based Generative Modeling through Stochastic Differential Equations , 2020, ICLR.
[68] Max Welling,et al. Emerging Convolutions for Generative Normalizing Flows , 2019, ICML.
[69] Konstantinos G. Derpanis,et al. Wavelet Flow: Fast Training of High Resolution Normalizing Flows , 2020, NeurIPS.
[70] Yee Whye Teh,et al. Do Deep Generative Models Know What They Don't Know? , 2018, ICLR.
[71] Andrew Y. Ng,et al. Reading Digits in Natural Images with Unsupervised Feature Learning , 2011 .
[73] Ligang Liu,et al. Generative Flows with Matrix Exponential , 2020, ICML.
[74] David Duvenaud,et al. Invertible Residual Networks , 2018, ICML.
[76] R Devon Hjelm,et al. On Adversarial Mixup Resynthesis , 2019, NeurIPS.
[77] Andrew Gordon Wilson,et al. Why Normalizing Flows Fail to Detect Out-of-Distribution Data , 2020, NeurIPS.
[78] Aaron C. Courville,et al. Augmented Normalizing Flows: Bridging the Gap Between Generative Flows and Latent Variable Models , 2020, ArXiv.
[79] Max Welling,et al. Improved Variational Inference with Inverse Autoregressive Flow , 2016, NIPS.
[80] Edward H. Adelson,et al. The Laplacian Pyramid as a Compact Image Code , 1983, IEEE Trans. Commun..
[82] Emiel Hoogeboom,et al. Integer Discrete Flows and Lossless Compression , 2019, NeurIPS.
[83] Eduard H. Hovy,et al. MaCow: Masked Convolutional Generative Flow , 2019, NeurIPS.
[84] Ole Winther,et al. Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow , 2020, NeurIPS.
[85] Léon Bottou,et al. Towards Principled Methods for Training Generative Adversarial Networks , 2017, ICLR.
[86] Tony Lindeberg,et al. Scale-Space for Discrete Signals , 1990, IEEE Trans. Pattern Anal. Mach. Intell..
[87] Tadeusz Styś,et al. A discrete maximum principle , 1981 .
[88] David Duvenaud,et al. Residual Flows for Invertible Generative Modeling , 2019, NeurIPS.
[89] Stéphane Mallat,et al. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation , 1989, IEEE Trans. Pattern Anal. Mach. Intell..
[90] Andrew P. Witkin,et al. Scale-Space Filtering , 1983, IJCAI.
[91] Vincent Y. F. Tan,et al. On Robustness of Neural Ordinary Differential Equations , 2020, ICLR.
[92] Jaakko Lehtinen,et al. Analyzing and Improving the Image Quality of StyleGAN , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[93] Ivan Kobyzev,et al. Normalizing Flows: An Introduction and Review of Current Methods , 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[94] Navdeep Jaitly,et al. Adversarial Autoencoders , 2015, ArXiv.
[95] Dustin Tran,et al. Image Transformer , 2018, ICML.
[96] Nal Kalchbrenner,et al. Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling , 2018, ICLR.
[98] Ioannis Mitliagkas,et al. Adversarial score matching and improved sampling for image generation , 2020, ICLR.
[99] Jiaming Song,et al. Denoising Diffusion Implicit Models , 2021, ICLR.
[100] Pieter Abbeel,et al. Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design , 2019, ICML.
[102] Tim Salimans,et al. Axial Attention in Multidimensional Transformers , 2019, ArXiv.
[103] You Lu,et al. Woodbury Transformations for Deep Generative Flows , 2020, NeurIPS.
[104] Alexander A. Alemi,et al. WAIC, but Why? Generative Ensembles for Robust Anomaly Detection , 2018 .