Margin-Based Transfer Learning

To achieve good generalization in supervised learning, the training and test examples are usually required to be drawn from the same distribution. In many practical settings, however, this identical-distribution assumption is violated: a task arrives from a new domain (the target domain), while labeled data are available only from a similar old domain (the auxiliary domain). Labeling the new data can be costly, and discarding all the old data would be wasteful. In this paper, we present a discriminative approach that exploits the intrinsic geometry of input patterns revealed by unlabeled data points, and we derive a maximum-margin formulation of unsupervised transfer learning. We propose two alternative algorithms to solve the resulting problem. Experimental results on a number of real data sets demonstrate the effectiveness and the potential of the proposed methods.
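The abstract does not specify the paper's exact formulation, but the setting it describes (a max-margin classifier trained on labeled auxiliary data, with unlabeled target data informing the decision boundary) can be illustrated with a generic sketch. The code below is a hypothetical, simplified instance of this setting, not the paper's method: a linear hinge-loss classifier is fit on the auxiliary domain, then confidently classified (high-margin) target points are pseudo-labeled and used to retrain, a simple self-training-style heuristic. The function names and parameters are assumptions for illustration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Hinge-loss linear classifier via subgradient descent; y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            # Subgradient of lam/2 * ||w||^2 + max(0, 1 - y_i (w.x_i + b))
            if y[i] * (X[i] @ w + b) < 1:
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                w = (1 - lr * lam) * w
    return w, b

def margin_transfer(X_aux, y_aux, X_tgt, conf=1.0):
    """Transfer sketch: train on labeled auxiliary data, then pseudo-label
    high-margin unlabeled target points and retrain on the combined set."""
    # Step 1: max-margin classifier on the labeled auxiliary domain.
    w, b = train_linear_svm(X_aux, y_aux)
    # Step 2: keep only target points lying outside the margin band,
    # i.e. points the auxiliary classifier is confident about.
    scores = X_tgt @ w + b
    keep = np.abs(scores) >= conf
    if keep.any():
        X = np.vstack([X_aux, X_tgt[keep]])
        y = np.concatenate([y_aux, np.sign(scores[keep])])
        # Step 3: retrain so the boundary respects the target geometry.
        w, b = train_linear_svm(X, y)
    return w, b
```

Here the unlabeled target points shift the decision boundary toward a large-margin separation of the target domain, which is the intuition behind using the geometry revealed by unlabeled data; the paper's actual formulation and its two solution algorithms may differ substantially from this heuristic.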