Invariant Representations without Adversarial Training

Representations of data that are invariant to changes in specified factors are useful for a wide range of problems: removing potential bias in prediction problems, controlling the effects of covariates, and disentangling meaningful factors of variation. Unfortunately, learning representations that exhibit invariance to arbitrary nuisance factors yet remain useful for other tasks is challenging. Existing approaches cast the trade-off between task performance and invariance as an adversarial game, solved by iterative minimax optimization. We show that adversarial training is unnecessary and sometimes counterproductive; instead, we cast invariant representation learning as a single information-theoretic objective that can be optimized directly. We demonstrate that this approach matches or exceeds the performance of state-of-the-art adversarial methods for learning fair representations and for generative modeling with controllable transformations.
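
To make the "single information-theoretic objective" concrete, here is a minimal sketch under a conditional-VAE reading of the abstract: the decoder receives the nuisance variable c alongside the code z (so z is not pressured to encode c), and a KL penalty compresses z. All names (CondDecoder, invariant_loss) and the weights beta and lam are illustrative assumptions, not the paper's exact objective or API.

```python
# A minimal sketch, assuming a conditional-VAE-style surrogate objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondDecoder(nn.Module):
    """Illustrative decoder that reconstructs x from the code z and nuisance c."""
    def __init__(self, z_dim, c_dim, x_dim):
        super().__init__()
        self.net = nn.Linear(z_dim + c_dim, x_dim)

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

def invariant_loss(x, c, enc, dec, beta=1.0, lam=1.0):
    mu, logvar = enc(x).chunk(2, dim=1)                       # q(z|x) parameters
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterized sample
    # -E[log p(x|z,c)] up to constants (Gaussian likelihood assumption);
    # because the decoder already sees c, z gains nothing by encoding it.
    recon = F.mse_loss(dec(z, c), x, reduction="sum") / x.size(0)
    # KL(q(z|x) || N(0,I)) in closed form; upper-bounds I(x;z).
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    # One direct objective: conditional reconstruction plus compression, with
    # lam reweighting the terms that (by assumption) stand in for a bound on I(z;c).
    return (1 + lam) * recon + (beta + lam) * kl

# Hypothetical usage: 64-dim inputs, 8-dim code, one-hot nuisance with 4 values.
enc = nn.Linear(64, 2 * 8)  # outputs (mu, logvar)
dec = CondDecoder(8, 4, 64)
x = torch.randn(32, 64)
c = F.one_hot(torch.randint(4, (32,)), 4).float()
invariant_loss(x, c, enc, dec).backward()
```

The point of the sketch is the absence of a discriminator: both penalty terms are fixed, differentiable expressions, so the loss can be minimized directly by gradient descent rather than through an iterative minimax loop.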
