A content-based similarity retrieval system for multimodal functional brain images
We have developed a similarity retrieval system for multimodal brain images. The system extracts modality-specific features from each image and uses them to compute a pairwise similarity score for images of the same modality. These per-modality scores are then combined into a single multimodal similarity score via a weighted sum. For fMRI images, we identify the most activated regions and extract the following features from each region: the region centroid, the region area, the average activation value over all voxels within the region, the variance of those activation values, the average distance from each voxel in the region to the region's centroid, and the variance of those distances. For ERP images, we identify only the positions of the local minima. The similarity between two images, whether fMRI or ERP, is computed as the average summed minimum distance, weighted by the inverse of the number of components, evaluated in both directions: from the query to the target and from the target to the query. We demonstrated that our method is sensitive to similarities in brain activation patterns among members of the same data set. Our experiments showed that the performance of the similarity measure depends strongly on the combination of features used. Using a similarity matrix generated by a human expert, we estimated a non-linear function that produces a similarity score for a pair of fMRI subjects; the function's estimates have an average correlation coefficient of 0.76 with the similarity scores the expert assigned to the same images. We also developed a MATLAB-based user interface to simplify use of this multimodal content-based similarity retrieval tool.
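The core of the measure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each image is reduced to a set of component points (e.g. fMRI region centroids or ERP local-minimum positions), takes the average of each point's minimum distance to the other set (the inverse-component-count weighting), symmetrizes over both directions, and combines per-modality scores with a weighted sum. The function names and the equal-weight averaging of the two directions are illustrative assumptions.

```python
import math

def summed_min_distance(a, b):
    # For each component point in a, find the minimum Euclidean distance
    # to any component point in b; average over a's components.
    # Dividing by len(a) is the inverse-of-component-count weighting.
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def similarity_distance(query, target):
    # Symmetrized measure: average of the query-to-target and
    # target-to-query summed minimum distances (lower = more similar).
    return 0.5 * (summed_min_distance(query, target)
                  + summed_min_distance(target, query))

def multimodal_score(per_modality_scores, weights):
    # Combine per-modality scores (e.g. fMRI and ERP) by weighted sum.
    return sum(w * s for w, s in zip(weights, per_modality_scores))
```

For example, two identical point sets yield a distance of 0, and shifting every component of one set by a fixed offset increases the distance by that offset's magnitude.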