Isomap-Based Self-Taught Transfer Learning for Image Classification

Machine learning tasks of interest, such as classification and recognition, require labeled data, which is often very difficult to obtain; unlabeled data is comparatively easy to acquire. Self-taught transfer learning is a method in which higher-level features, or bases, are extracted from a large amount of unlabeled data in a source domain, and these features are then used together with labeled data in a target domain to perform the task of interest. We introduce a novel method that uses Isomap to generate the features, or bases, in the source domain, which are then used in the target domain to perform classification with a small amount of labeled data. The proposed method is compared with sparse coding, and the results are promising.
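The self-taught pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses scikit-learn's `Isomap` on synthetic placeholder data, and the classifier choice, dimensions, and hyperparameters are all assumptions for demonstration.

```python
# Hedged sketch of self-taught transfer learning with Isomap:
# learn an embedding from plentiful unlabeled source-domain data,
# then reuse it to embed a small labeled target-domain set.
# All data here is synthetic; shapes and parameters are illustrative.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Plentiful unlabeled source-domain data (e.g. flattened image patches).
X_unlabeled = rng.normal(size=(500, 64))

# Small labeled target-domain set.
X_labeled = rng.normal(size=(40, 64))
y_labeled = rng.integers(0, 2, size=40)

# Learn a low-dimensional manifold embedding from the unlabeled data only.
iso = Isomap(n_neighbors=10, n_components=5)
iso.fit(X_unlabeled)

# Project the labeled target data onto the learned bases,
# then train a classifier on the small labeled set.
Z = iso.transform(X_labeled)
clf = LogisticRegression().fit(Z, y_labeled)
print("training accuracy:", clf.score(Z, y_labeled))
```

The key design point, mirroring the abstract, is that the embedding is fit exclusively on unlabeled source-domain data; the scarce labels are consumed only by the downstream classifier.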