Composing Normalizing Flows for Inverse Problems

Given an inverse problem with a normalizing flow prior, we wish to estimate the distribution of the underlying signal conditioned on the observations. We cast this as conditional inference on a pre-trained unconditional flow model, and first establish that exact conditional inference is computationally hard for a large class of flow models. Motivated by this, we propose a framework for approximate inference that estimates the target conditional as a composition of two flow models. This formulation leads to a stable variational inference training procedure that avoids adversarial training. Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification. We further demonstrate that our approach can be amortized for zero-shot inference.
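To make the composition concrete, the sketch below illustrates one variational step under stated assumptions; it is a minimal illustration, not the paper's implementation. It assumes a frozen pre-trained flow `g` mapping latents z to signals x, a hypothetical trainable conditional flow `h(eps, y)` that returns its output and log-determinant, and a linear Gaussian forward model y = Ax + noise. One convenience of composing through the invertible `g` is that its Jacobian term cancels between the prior density and the variational density, so the objective needs only the log-determinant of `h`.

```python
import torch

# Hypothetical interfaces (assumptions, not the paper's API):
#   g(z) -> x                   frozen pre-trained unconditional flow
#   h(eps, y) -> (z, logdet_h)  trainable conditional flow with log-det

def elbo_loss(g, h, y, A, sigma, n_samples=8):
    """Negative ELBO for one variational step:
        maximize E_q[ log p(y|x) + log p_X(x) - log q(x) ]
    with x = g(h(eps)). The Jacobian of g cancels, leaving
        E[ log p(y|x) + log N(z; 0, I) - log N(eps; 0, I) + logdet_h ].
    """
    eps = torch.randn(n_samples, A.shape[1])   # base samples
    z, logdet_h = h(eps, y)                    # conditional flow
    x = g(z)                                   # frozen prior flow

    # Gaussian likelihood of the observation y = Ax + noise
    log_lik = -((y - x @ A.T) ** 2).sum(dim=1) / (2 * sigma ** 2)
    log_prior = -0.5 * (z ** 2).sum(dim=1)     # standard-normal latent prior
    log_base = -0.5 * (eps ** 2).sum(dim=1)    # base density of eps

    return -(log_lik + log_prior - log_base + logdet_h).mean()
```

In use, only the parameters of `h` receive gradients; `g` stays fixed, and posterior samples are drawn as `g(h(eps, y))` for fresh `eps`. Conditioning `h` on `y` during training is what allows the amortized, zero-shot inference mentioned above.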
