Identification of Water and Fat Images in Dixon MRI Using Aggregated Patch-Based Convolutional Neural Networks

MR water-fat separation based on the Dixon method produces water and fat images that serve as an important tool for fat suppression and quantification. However, the separation procedure itself cannot assign "water" or "fat" labels to the synthesized images. Approaches based on heuristic physiological assumptions and traditional image analysis methods have been designed to label water/fat images, but their robustness, in particular across different body parts and imaging protocols, may not satisfy the extremely high requirements of clinical applications. In this paper, we propose a highly robust method to identify water and fat images in MR Dixon imaging using a convolutional neural network (CNN). Unlike standard CNN-based image classification, which treats the image as a whole, our method learns appearance characteristics in local patches and aggregates them for global image identification. This distributed and redundant local information ensures the robustness of our approach. We design an aggregated patch-based CNN that includes two sub-networks, ProbNet and MaskNet. The ProbNet derives a dense map of patch-based classification probabilities, while the MaskNet extracts informative local patches and aggregates their outputs. Both sub-networks are encapsulated in a unified network structure that can be trained in an end-to-end fashion. More importantly, since at run time the test image needs only a single pass through our network, our method is much more efficient than traditional sliding-window approaches. We validate our method on 2887 pairs of Dixon water and fat images. It achieves high accuracy (99.96%) and run-time efficiency (110 ms/volume).
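
Below is a minimal sketch of the two-sub-network idea described in the abstract, written in PyTorch. The layer configurations, class names (ProbNet aside from its role, MaskNet aside from its role, AggregatedPatchCNN), and the weighted-average aggregation rule are illustrative assumptions for exposition only, not the authors' actual architecture; the sketch only shows how a dense patch-probability map and a patch-informativeness mask can be combined into one image-level prediction in a single forward pass.

```python
# Illustrative sketch (assumed PyTorch implementation): dense patch-level
# probabilities from a "ProbNet" are aggregated using weights from a
# "MaskNet" into one image-level water/fat prediction.
import torch
import torch.nn as nn


class ProbNet(nn.Module):
    """Fully convolutional sub-network producing a dense map of
    patch-level water-vs-fat logits (sizes are illustrative)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # per-patch logit
        )

    def forward(self, x):
        return self.features(x)  # (B, 1, H', W') logits


class MaskNet(nn.Module):
    """Sub-network scoring how informative each local patch is,
    so uninformative (e.g., background) patches are down-weighted."""

    def __init__(self):
        super().__init__()
        self.scores = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=5, stride=2, padding=2),
        )

    def forward(self, x):
        return torch.sigmoid(self.scores(x))  # (B, 1, H', W') weights in [0, 1]


class AggregatedPatchCNN(nn.Module):
    """Unified network: patch probabilities are aggregated with mask
    weights into one image-level prediction, so the model trains
    end-to-end and needs only a single forward pass at test time."""

    def __init__(self):
        super().__init__()
        self.prob_net = ProbNet()
        self.mask_net = MaskNet()

    def forward(self, x):
        probs = torch.sigmoid(self.prob_net(x))   # dense patch probabilities
        weights = self.mask_net(x)                # patch informativeness
        # Weighted average over all patches -> image-level probability.
        agg = (probs * weights).sum(dim=(2, 3)) / (weights.sum(dim=(2, 3)) + 1e-6)
        return agg.squeeze(1)                     # (B,) probability of "water"


if __name__ == "__main__":
    model = AggregatedPatchCNN()
    slices = torch.randn(2, 1, 256, 256)          # a batch of 2D Dixon slices
    print(model(slices).shape)                    # torch.Size([2])
```

Because both sub-networks are convolutional and share a single forward pass, the whole image is processed at once rather than by sliding a patch classifier over it, which is what makes the single-pass run-time claim in the abstract plausible.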