Adaptive medical image visualization based on hierarchical neural networks and intelligent decision fusion

An adaptive medical image visualization system based on a hierarchical neural network structure and intelligent decision fusion is presented. It consists of a feature generator that uses both histogram and spatial information computed from a medical image, a wavelet transform for compressing the feature vector, a competitive-layer neural network for clustering images into subclasses, a bi-modal linear estimator and an RBF network based nonlinear estimator for each subclass, and an intelligent decision fusion process that integrates the estimates from both estimators. Both estimators can adapt to new types of medical images simply by being trained on those images. The large training image set is hierarchically organized for efficient user interaction and effective re-mapping of the width/center settings in the training data. Adaptation is achieved by modifying the width/center values through a mapping function estimated from the width/center settings of a few representative images. While the RBF network based estimator performs well for images similar to those in the training set, the bi-modal linear estimator provides reasonable estimates for a wide range of images. The decision fusion step makes the final estimate of the display parameters accurate for trained images and robust for unknown images. The algorithm has been tested on a wide range of MR images and has shown satisfactory results. Although the algorithm is comprehensive, its execution time remains within a reasonable range.
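The overall pipeline described above (wavelet compression of the feature vector, an RBF-network estimator, and confidence-weighted fusion with a fallback linear estimate) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and class names (`haar_compress`, `RBFEstimator`, `fuse`), the Gaussian-kernel confidence measure, and all parameter values are assumptions introduced for the sketch; the paper's actual feature definitions, network sizes, and fusion rule are not specified here.

```python
import numpy as np

def haar_compress(feature, levels=2):
    """Compress a 1-D feature vector by keeping only the coarse
    Haar-wavelet approximation coefficients (length shrinks 2^levels-fold).
    Assumes the vector length is divisible by 2**levels."""
    x = feature.astype(float)
    for _ in range(levels):
        # low-pass (averaging) branch of the Haar transform
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    return x

class RBFEstimator:
    """Gaussian RBF network: one basis function per training center,
    output weights fit by least squares against the width/center targets."""
    def __init__(self, centers, targets, sigma=1.0):
        self.centers = centers
        self.sigma = sigma
        phi = self._design(centers)
        self.weights, *_ = np.linalg.lstsq(phi, targets, rcond=None)

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def predict(self, X):
        return self._design(X) @ self.weights

    def confidence(self, X):
        # near 1 close to some training center, decaying toward 0
        # far from the training set (the "unknown image" case)
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2.min(axis=1) / (2 * self.sigma ** 2))

def fuse(linear_est, rbf_est, conf):
    """Confidence-weighted decision fusion: trust the RBF estimate near
    the training data, fall back to the linear estimate elsewhere."""
    return conf[:, None] * rbf_est + (1 - conf[:, None]) * linear_est
```

With distinct centers the Gaussian design matrix is invertible, so the RBF network interpolates the training targets exactly; the fusion step then reproduces those targets for trained images (confidence near 1) and degrades gracefully toward the linear estimate for dissimilar images.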