GAP: Quantifying the Generative Adversarial Set and Class Feature Applicability of Deep Neural Networks

Recent work on deep neural networks has sought to characterize how a network learns features and how applicable those learned features are to other problem sets. Deep neural network applicability can be split into three sub-problems: set applicability, class applicability, and instance applicability. In this work we quantify the applicability of features learned during adversarial training, focusing specifically on set and class applicability. We apply techniques for measuring applicability to both generators and discriminators trained on a variety of datasets.
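As a rough illustration of what a set- or class-level applicability measurement can look like in practice, the sketch below freezes the early layers of a pretrained GAN discriminator and fine-tunes a small probe head on a target dataset, using the probe's accuracy as a proxy for how applicable the frozen features are. This is a minimal sketch under assumed conventions: the DCGAN-style architecture, layer counts, and all names (`DiscriminatorFeatures`, `applicability_probe`) are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact method): estimate the applicability
# of a GAN discriminator's early-layer features to a new dataset by freezing
# those layers and fine-tuning a small classification head on the target data.
import torch.nn as nn


class DiscriminatorFeatures(nn.Module):
    """DCGAN-style convolutional feature extractor (hypothetical architecture)."""

    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.blocks(x)


def applicability_probe(features: DiscriminatorFeatures, n_classes: int,
                        freeze_up_to: int) -> nn.Module:
    """Freeze the first `freeze_up_to` layers of the pretrained feature stack
    and attach a fresh linear head; the probe's fine-tuned accuracy on a target
    dataset serves as a rough applicability score for the frozen features."""
    for i, layer in enumerate(features.blocks):
        for p in layer.parameters():
            p.requires_grad = i >= freeze_up_to
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(256, n_classes))
    return nn.Sequential(features, head)


# Usage: probe = applicability_probe(pretrained_features, n_classes=10, freeze_up_to=4)
# then fine-tune `probe` on the target dataset and record its accuracy.
```

A class-level variant of the same probe would report per-class accuracy rather than a single aggregate score, while a set-level variant compares scores across entire target datasets.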
