Transfer learning for AiTR: comparing deep learning to other machine learning approaches

Aided target recognition (AiTR), the problem of classifying objects from sensor data, has important applications across industry and defense. While classification algorithms continue to improve, they often require more training data than is available or do not transfer well to settings not represented in the training set. These problems can be mitigated by transfer learning (TL), in which knowledge gained in a well-understood source domain is transferred to a target domain of interest. In this context, the target domain could represent a poorly labeled dataset, a different sensor, or an altogether new set of classes to identify. While TL for classification has been an active area of machine learning (ML) research for decades, transfer learning within a deep learning (DL) framework remains relatively new. Although DL provides exceptional modeling flexibility and accuracy on real-world problems, open questions remain regarding how much transfer benefit is gained by using DL versus other ML architectures. Our goal is to address this gap by comparing transfer learning within a DL framework to other ML approaches across transfer tasks and datasets. Our main contributions are: 1) an empirical analysis of DL and other ML algorithms on several transfer tasks and domains, including gene expression data and satellite imagery, and 2) a discussion of the limitations and assumptions of TL for aided target recognition, both for DL specifically and for ML in general. We close with a discussion of future directions for DL transfer.
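
To make the comparison concrete, the sketch below contrasts the two families of transfer discussed above: fine-tuning a deep network pretrained on a source domain versus freezing that network and handing its features to a conventional ML classifier. This is a minimal illustration rather than the paper's experimental pipeline; the pretrained model (an ImageNet-pretrained ResNet-18 from torchvision), the SVM classifier, the class count, and the variable names (X_target, y_target) are assumptions chosen purely for illustration.

```python
# Minimal sketch of two common transfer strategies (not the paper's experimental code):
# (a) deep transfer: fine-tune a source-domain-pretrained CNN on the target domain,
# (b) classic ML transfer: freeze the CNN as a feature extractor and train an SVM on top.
# Assumes torchvision >= 0.13 (for the weights enum) and scikit-learn.

import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

NUM_TARGET_CLASSES = 10  # assumed size of the target-domain label set

# (a) Deep transfer: replace the classifier head and fine-tune end to end.
deep_model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
deep_model.fc = nn.Linear(deep_model.fc.in_features, NUM_TARGET_CLASSES)
optimizer = torch.optim.SGD(deep_model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# ... train deep_model on target-domain batches with optimizer/criterion ...

# (b) Classic ML transfer: use the frozen source-domain network purely as a
# feature extractor, then fit a conventional classifier (here an RBF-kernel SVM).
feature_extractor = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feature_extractor.fc = nn.Identity()  # drop the source-domain classification head
feature_extractor.eval()

def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of preprocessed target-domain images to fixed 512-d features."""
    with torch.no_grad():
        return feature_extractor(images)

# X_target: tensor of preprocessed target images, y_target: integer labels (assumed inputs)
# svm = SVC(kernel="rbf").fit(extract_features(X_target).numpy(), y_target)
```

Strategy (a) adapts the deep representation itself to the target domain, while strategy (b) keeps the source representation fixed and lets a non-deep learner do the adaptation; the empirical question the paper studies is how much transfer benefit each choice actually yields.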
