A Scalable Architecture for Cross-Modal Semantic Annotation and Retrieval

Even within constrained domains such as medicine, there are no truly generic methods for automatic image parsing and annotation. Although the precision and sophistication of image understanding methods have improved to cope with the increasing volume and complexity of the data, these improvements have not yielded more flexible or generic image understanding techniques. Instead, the analysis methods remain object-specific and modality-dependent. Consequently, current image search techniques still rely on the manual and subjective association of keywords with images for retrieval. Manually annotating the vast numbers of images generated and archived in medical practice is not a viable option.