When and How Can Deep Generative Models be Inverted?

Deep generative models (e.g., GANs and VAEs) have developed rapidly in recent years. Lately, there has been increased interest in inverting such models: given a (possibly corrupted) signal, we wish to recover the latent vector that generated it. Building upon sparse representation theory, we define conditions, applicable to any inversion algorithm (gradient descent, deep encoder, etc.), under which such generative models are invertible with a unique solution. Importantly, the proposed analysis applies to any trained model and does not rely on Gaussian i.i.d. weights. Furthermore, we introduce two layer-wise inversion pursuit algorithms for trained generative networks of arbitrary depth and accompany them with recovery guarantees. Finally, we validate our theoretical results numerically and show that our method outperforms gradient descent when inverting such generators, for both clean and corrupted signals.
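To make the inversion problem concrete, here is a minimal sketch of the gradient-descent baseline mentioned above: a toy two-layer ReLU generator G(z) = W2 relu(W1 z) with hypothetical random weights (the paper's analysis covers arbitrary trained weights, not just Gaussian i.i.d. ones), and plain gradient descent on the squared reconstruction error to recover z from an observed signal x = G(z_true).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expansive two-layer ReLU generator G(z) = W2 @ relu(W1 @ z).
# Weights here are random for illustration only; the paper's conditions
# apply to any trained weights.
k, m, n = 5, 20, 60                      # latent < hidden < output dims
W1 = rng.standard_normal((m, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, m)) / np.sqrt(m)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.standard_normal(k)
x = G(z_true)                            # observed (clean) signal

# Gradient descent on f(z) = 0.5 * ||G(z) - x||^2 -- the baseline
# inversion method that the layer-wise algorithms are compared against.
z = 0.1 * rng.standard_normal(k)         # small random init (z = 0 is a dead point)
err0 = np.linalg.norm(G(z) - x)          # initial reconstruction error
lr = 0.01
for _ in range(3000):
    h = W1 @ z
    r = W2 @ np.maximum(h, 0.0) - x      # residual G(z) - x
    # Chain rule through the ReLU: grad = W1^T diag(h > 0) W2^T r
    z -= lr * (W1.T @ ((h > 0.0) * (W2.T @ r)))
err = np.linalg.norm(G(z) - x)           # final reconstruction error
```

Because the loss is non-convex in z, plain gradient descent carries no general guarantee of reaching z_true; this is precisely the gap that the invertibility conditions and layer-wise pursuit algorithms in the paper address.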
