Multi-Similarity Based Multi-Source Transfer Learning and Its Applications

In this paper, a novel multi-source transfer learning method based on multi-similarity, (MS)TL, is proposed. First, we measure the similarities between domains at two levels, i.e., "domain-domain" and "sample-domain." With these multi-similarities, (MS)TL can capture the relationship between the source domains and the target domain more accurately. Then, the knowledge of the source domains is transferred to the target domain under the smoothness assumption, which requires the target classifier to share similar decision values with the relevant source classifiers on the unlabeled target samples. (MS)TL increases the chance of finding sources closely related to the target, thereby reducing "negative transfer," and imports more knowledge from multiple sources for target learning. Furthermore, (MS)TL requires only the pre-learned source classifiers when training the target classifier, which makes it suitable for large datasets. We also employ a sparsity regularizer based on the ε-insensitive loss to enforce the sparsity of the target classifier, whose support vectors come only from the target domain, so that label prediction on any test sample is very fast. (MS)TL is validated on both toy and real-world datasets, and the experimental results demonstrate that it enhances learning performance more effectively and stably. Finally, (MS)TL is applied to the communication-specific emitter identification task, with satisfactory results.
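
To make the two-level similarity idea concrete, the following Python sketch (hypothetical helper names such as `domain_similarity` and `sample_domain_similarity`; not the authors' implementation) estimates domain-domain similarity with a kernel MMD, estimates sample-domain similarity with mean kernel affinities, and combines the pre-learned source classifiers' decision values on the unlabeled target samples using these weights before fitting a target classifier. The paper's ε-insensitive sparsity regularizer is replaced here by a standard SVM for brevity.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def mmd(X_s, X_t, gamma=1.0):
    """Squared maximum mean discrepancy between two sample sets (RBF kernel)."""
    k_ss = rbf_kernel(X_s, X_s, gamma=gamma).mean()
    k_tt = rbf_kernel(X_t, X_t, gamma=gamma).mean()
    k_st = rbf_kernel(X_s, X_t, gamma=gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

def domain_similarity(X_s, X_t, gamma=1.0):
    """Domain-domain similarity: larger when the MMD between the domains is small."""
    return np.exp(-mmd(X_s, X_t, gamma))

def sample_domain_similarity(X_t, X_s, gamma=1.0):
    """Sample-domain similarity: mean kernel affinity of each target sample to a source."""
    return rbf_kernel(X_t, X_s, gamma=gamma).mean(axis=1)

# Toy data: two labeled source domains and one unlabeled target domain.
rng = np.random.default_rng(0)
def make_domain(shift, n=200):
    X = np.vstack([rng.normal(shift, 1.0, (n, 2)), rng.normal(shift + 3.0, 1.0, (n, 2))])
    y = np.hstack([np.zeros(n), np.ones(n)])
    return X, y

(X_s1, y_s1), (X_s2, y_s2) = make_domain(0.0), make_domain(5.0)
X_t, y_t_true = make_domain(0.5)   # target labels used only for evaluation

# Pre-learned source classifiers; only these are needed when training the target model.
sources = [X_s1, X_s2]
clf_s = [SVC(probability=True).fit(X, y) for X, y in [(X_s1, y_s1), (X_s2, y_s2)]]

# Combine the two similarity levels into per-sample, per-source weights.
w_dom = np.array([domain_similarity(X_s, X_t) for X_s in sources])
w_smp = np.vstack([sample_domain_similarity(X_t, X_s) for X_s in sources])
w = w_dom[:, None] * w_smp
w /= w.sum(axis=0, keepdims=True)

# Smooth the source decision values on the unlabeled target samples into pseudo-labels.
p_src = np.vstack([clf.predict_proba(X_t)[:, 1] for clf in clf_s])
pseudo = ((w * p_src).sum(axis=0) > 0.5).astype(int)

# Train the target classifier on target-domain samples only.
clf_t = SVC().fit(X_t, pseudo)
print("target accuracy:", (clf_t.predict(X_t) == y_t_true).mean())
```

In this sketch, a source whose distribution is far from the target receives a small domain-domain weight, so its decision values contribute little to the pseudo-labels, which is the intuition behind reducing negative transfer.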
