Joint cross-domain classification and subspace learning

Domain adaptation aims at adapting a prediction function trained on a source domain to a new, different but related target domain. Recently, several subspace learning methods have proposed adaptive solutions for the unsupervised case, where no labeled data are available for the target. Most of the attention has been dedicated to searching for a new low-dimensional, domain-invariant representation, leaving the definition of the prediction function to a second stage. Here we propose to learn both jointly. Specifically, we learn the source subspace that best matches the target subspace while at the same time minimizing a regularized misclassification loss. We provide an alternating optimization technique based on stochastic sub-gradient descent to solve the learning problem, and we demonstrate its performance on several domain adaptation tasks.
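
To make the joint objective concrete, the sketch below illustrates one simplified reading of the abstract: a source basis B is pulled toward a fixed target PCA basis Bt (subspace matching) while a linear classifier theta is trained with a regularized hinge loss on the projected source data, using interleaved stochastic sub-gradient steps on both variables. All names and modeling choices here (joint_learning, B, theta, lam, C, the hinge loss, the Frobenius alignment term) are assumptions made for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: a simplified joint subspace-and-classifier objective
# in the spirit of the abstract, not the paper's exact method.
import numpy as np

def target_subspace(Xt, d):
    """Top-d PCA basis of the (unlabeled) target data; columns are orthonormal."""
    Xc = Xt - Xt.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                          # shape (D, d)

def joint_learning(Xs, ys, Xt, d=10, lam=1.0, C=1e-3, lr=1e-3, epochs=20, seed=0):
    """Interleaved (alternating-style) stochastic sub-gradient steps on a source
    basis B and a linear classifier theta, minimizing
        hinge loss on B-projected source data + C*||theta||^2 + lam*||B - Bt||_F^2,
    where Bt is the fixed target PCA basis and ys is assumed to be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    Bt = target_subspace(Xt, d)
    B = Bt.copy()                            # initialize source basis at the target basis
    theta = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(len(Xs)):
            x, y = Xs[i], ys[i]
            z = B.T @ x                      # project source sample onto current basis
            margin = y * (theta @ z)
            # classifier step: hinge sub-gradient plus L2 regularizer
            g_theta = 2 * C * theta - (y * z if margin < 1 else 0.0)
            theta -= lr * g_theta
            # basis step: subspace-alignment term plus hinge sub-gradient
            g_B = 2 * lam * (B - Bt)
            if margin < 1:
                g_B -= y * np.outer(x, theta)
            B -= lr * g_B
    return B, theta, Bt

# One plausible way to use the result in this sketch: project target samples
# onto the target basis and apply the learned classifier.
# B, theta, Bt = joint_learning(Xs, ys, Xt)
# preds = np.sign(Xt @ Bt @ theta)
```

The key design point reflected here is that the classifier and the subspace are updated against the same objective, rather than fixing the representation first and training a predictor afterwards; the alternation between the two sets of updates mirrors the alternating optimization mentioned in the abstract.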