A Transfer-Learning Approach to Image Segmentation Across Scanners by Maximizing Distribution Similarity

Many successful methods for biomedical image segmentation are based on supervised learning, where a segmentation algorithm is trained on manually labeled data. For supervised-learning algorithms to perform well, this training data has to be representative of the target data. In practice, however, such representative training data is often unavailable due to differences between scanners. We therefore present a segmentation algorithm in which the labeled training data does not need to be representative of the target data, which allows training data to come from different studies than the target data. The algorithm assigns an importance weight to each training image such that the Kullback-Leibler divergence between the resulting weighted distribution of the training data and the distribution of the target data is minimized. In experiments on MRI brain-tissue segmentation, with training and target data drawn from four substantially different studies, our method reduced mean classification errors by up to 25% compared to common supervised-learning approaches.
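
The core idea of importance weighting by KL-divergence minimization can be sketched as follows. This is a minimal toy illustration, not the paper's actual method: it uses hypothetical 1-D intensity features and simple histogram density estimates, and it weights individual samples rather than whole training images. Weights are parameterized with a softmax so they stay nonnegative and sum to one, and plain gradient descent reduces the KL divergence from the target distribution to the weighted training distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features: 1-D intensities from a "source scanner"
# (training) and a "target scanner" whose distribution is shifted.
train = rng.normal(0.0, 1.0, 200)
target = rng.normal(1.0, 1.0, 200)

bins = np.linspace(-4.0, 6.0, 21)  # 20 histogram bins
eps = 1e-8

t_hist, _ = np.histogram(target, bins=bins)
p = (t_hist + eps) / (t_hist + eps).sum()  # target distribution

# Bin index of each training sample (clipped to the histogram range).
idx = np.clip(np.digitize(train, bins) - 1, 0, len(bins) - 2)

def kl_to_target(w):
    """KL(target || weighted training histogram) for weights w summing to 1."""
    q = np.bincount(idx, weights=w, minlength=len(p)) + eps
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Gradient descent on softmax-parameterized importance weights.
theta = np.zeros(len(train))
for _ in range(500):
    w = np.exp(theta - theta.max())
    w /= w.sum()
    q = np.bincount(idx, weights=w, minlength=len(p)) + eps
    q = q / q.sum()
    r = p[idx] / q[idx]        # density ratio at each training sample
    grad = -w * (r - w @ r)    # d KL / d theta (softmax chain rule)
    theta -= 2.0 * grad

w = np.exp(theta - theta.max())
w /= w.sum()
kl_uniform = kl_to_target(np.full(len(train), 1.0 / len(train)))
kl_weighted = kl_to_target(w)
print(f"KL uniform:  {kl_uniform:.4f}")
print(f"KL weighted: {kl_weighted:.4f}")  # smaller: weights adapt to target
```

Samples that are more typical of the target distribution than of the training distribution receive larger weights; in a supervised segmentation setting these weights would then multiply each training sample's contribution to the classifier's loss.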