On modality classification and its use in text-based image retrieval in medical databases

Medical databases have been a popular application field for image retrieval techniques during the last decade. More recently, much attention has been paid to the prediction of medical image modality (X-rays, MRI…) and the integration of the predicted modality into image retrieval systems. This paper addresses both issues. On the one hand, we believe that specific visual descriptors can be designed to determine image modality much more efficiently than the generic image descriptors traditionally used for this task. We propose very lightweight image descriptors that better capture modality properties and show promising results. On the other hand, we present a comparison of existing and new methods for integrating modality into retrieval. This comprehensive study provides insights into the behavior of these models with respect to the initial classification and retrieval systems, and the results can be extended to other applications with a similar framework. All the experiments presented in this work are performed using datasets provided during the 2009 and 2010 ImageCLEF medical tracks.