Domain Adaptation for Microscopy Imaging

Electron and light microscopy can now deliver high-quality image stacks of neural structures, but the human annotation effort required to analyze them remains a major bottleneck. Machine learning algorithms can help automate this analysis, yet they require training data, which is time-consuming to obtain manually, especially for image stacks. Furthermore, because experimental conditions change, successive stacks often exhibit differences severe enough that a classifier trained on one stack cannot simply be applied to another, so the tedious annotation process must be repeated for each new stack. In this paper, we present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions, significantly reducing the annotation requirements. Our approach handles complex, nonlinear image feature transformations and scales to large microscopy datasets, which often involve high-dimensional feature spaces and large 3D data volumes. We evaluate our approach on four challenging electron and light microscopy applications with very different image modalities and very costly annotation. Across all applications we achieve a significant improvement over state-of-the-art machine learning methods and demonstrate our ability to greatly reduce human annotation effort.
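The abstract does not spell out the algorithm, so the sketch below is not the method proposed in the paper. It is a minimal, hypothetical illustration of one common way to leverage labeled examples across acquisitions: a TrAdaBoost-style instance-reweighting boosting loop. It assumes abundant labeled source data (Xs, ys) from an already-annotated stack, a small labeled target set (Xt, yt) from a new stack, and scikit-learn decision trees as weak learners; all names and parameters are choices made for this example only.

```python
# Illustrative sketch only: a TrAdaBoost-style transfer-boosting loop,
# NOT the algorithm proposed in the paper. Source examples that the weak
# learners keep getting wrong are down-weighted, while misclassified
# target examples are up-weighted, so the ensemble gradually focuses on
# the new acquisition.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def transfer_boost(Xs, ys, Xt, yt, n_rounds=20, max_depth=3):
    """Train on pooled source (Xs, ys) and target (Xt, yt) labeled data."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    n_s, n_t = len(ys), len(yt)
    w = np.ones(n_s + n_t) / (n_s + n_t)           # instance weights
    beta_s = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=max_depth)
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        miss_s = (pred[:n_s] != ys).astype(float)
        miss_t = (pred[n_s:] != yt).astype(float)
        # weighted error on the target portion only
        err_t = np.sum(w[n_s:] * miss_t) / np.sum(w[n_s:])
        err_t = min(max(err_t, 1e-10), 0.499)      # keep beta_t well defined
        beta_t = err_t / (1.0 - err_t)
        w[:n_s] *= beta_s ** miss_s                # shrink bad source weights
        w[n_s:] *= beta_t ** (-miss_t)             # grow bad target weights
        w /= w.sum()
        learners.append(clf)
        betas.append(beta_t)
    # prediction would use a vote of the later-round learners weighted
    # by log(1 / beta_t); omitted here for brevity
    return learners, betas
```

The design choice illustrated here is purely instance-based: no feature transformation is learned, whereas the paper's approach explicitly targets nonlinear feature shifts between acquisitions, so this sketch should be read only as a baseline-style example of reusing annotations across stacks.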
