Instance-based Transfer Learning for Multi-source Domains

A key characteristic of transfer learning is that it can exploit knowledge from related domains to support learning tasks in the target domain. By drawing on knowledge from different fields for target-task learning, transfer learning transfers and shares information between similar domains or tasks, turning traditional learning from scratch into a cumulative process with higher learning efficiency and lower cost. For the case in which the knowledge shared between the source and target domains consists of sample data with similar distributions, an instance-based transfer learning method built on multi-source dynamic TrAdaBoost is proposed. By integrating knowledge from multiple source domains, the method enables target-task learning to make good use of the information in all source domains. Whenever candidate classifiers are trained, the samples from every source domain take part in learning, so that information conducive to the target task is retained and negative transfer is avoided. Theoretical analysis suggests that the proposed algorithm outperforms single-source transfer. By introducing a dynamic factor, the algorithm mitigates the defect that weight entropy drifts from source to target instances. Experimental results confirm that the proposed algorithm improves the recognition rate.
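To make the round structure concrete, the following is a minimal sketch of a multi-source dynamic TrAdaBoost loop. It is not the paper's exact algorithm: it assumes binary labels, uses scikit-learn decision stumps as the base learner, picks per round the candidate classifier (one per source domain) with the lowest weighted error on the target data, and applies a dynamic correction factor in the style of Al-Stouhi and Chawla to the source-instance weight update; the function name and all constants are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def multi_source_dynamic_tradaboost(source_sets, X_tgt, y_tgt, n_rounds=20):
    """Sketch of a multi-source dynamic TrAdaBoost training loop.

    source_sets: list of (X_src, y_src) pairs, one per source domain.
    X_tgt, y_tgt: labelled target-domain training data, labels in {0, 1}.
    """
    n_tgt = len(y_tgt)
    n_srcs = [len(ys) for _, ys in source_sets]
    n_all_src = sum(n_srcs)

    # One weight vector per source domain plus one for the target instances.
    w_srcs = [np.ones(n) / (n_all_src + n_tgt) for n in n_srcs]
    w_tgt = np.ones(n_tgt) / (n_all_src + n_tgt)

    # Fixed down-weighting rate for misclassified source instances (as in TrAdaBoost).
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_all_src) / n_rounds))

    learners, alphas = [], []
    for t in range(n_rounds):
        # Train one candidate per source domain on that source plus the target data,
        # then keep the candidate with the lowest weighted error on the target set.
        best = None
        for k, (X_src, y_src) in enumerate(source_sets):
            X = np.vstack([X_src, X_tgt])
            y = np.concatenate([y_src, y_tgt])
            w = np.concatenate([w_srcs[k], w_tgt])
            clf = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w / w.sum())
            eps = np.sum(w_tgt * (clf.predict(X_tgt) != y_tgt)) / w_tgt.sum()
            if best is None or eps < best[0]:
                best = (eps, clf)
        eps_t, clf_t = best
        eps_t = min(max(eps_t, 1e-10), 0.499)  # keep the weight update well defined

        beta_t = eps_t / (1.0 - eps_t)
        # Dynamic factor: rescales source weights so that weight mass does not
        # drift entirely onto the target instances (assumed form, 2 * (1 - error)).
        C_t = 2.0 * (1.0 - eps_t)

        # Target instances: AdaBoost-style update, mistakes gain weight.
        miss_tgt = (clf_t.predict(X_tgt) != y_tgt).astype(float)
        w_tgt *= beta_t ** (-miss_tgt)

        # Source instances: mistakes lose weight, scaled by the dynamic factor.
        for k, (X_src, y_src) in enumerate(source_sets):
            miss_src = (clf_t.predict(X_src) != y_src).astype(float)
            w_srcs[k] *= C_t * beta_src ** miss_src

        learners.append(clf_t)
        alphas.append(np.log(1.0 / beta_t))

    def predict(X):
        # Weighted vote over the second half of the learners, as in TrAdaBoost.
        half = len(learners) // 2
        score = sum(a * (h.predict(X) == 1) for h, a in zip(learners[half:], alphas[half:]))
        return (score >= 0.5 * sum(alphas[half:])).astype(int)

    return predict
```

In this sketch, negative transfer is limited because only the source domain whose candidate classifier best fits the current target weighting contributes the round's learner, while the dynamic factor C_t keeps the source-instance weights from collapsing toward zero across rounds.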