Automated boundary tracing using temporal information
Much effort has been devoted to automating the detection of the left ventricular (LV) boundary, since left ventricular angiography is an important tool for the diagnosis of heart disease. LV boundary extraction is a difficult problem because of the wide variation in image contrast caused by structures other than the dye-filled ventricle, the incomplete mixing of the dye within the ventricle, and the broad range of ventricular shape, size, and orientation. Most techniques have focused on extracting the boundary from intra-frame information, such as shape and gray-scale gradient within a single frame. Gray-scale gradient information is appropriate for edge detection in many images, but not in the left ventriculogram (LVG), where the boundary does not always lie on a high gradient and sometimes has no gradient at all. Observing how trained technicians trace the LV boundary shows that inter-frame information, and its propagation across frames, is essential to manual tracing of the LV boundary.
We formulate LV boundary detection as a series of two estimation problems: a Bayesian classification from multiple images followed by a region estimation with motion constraints. The availability of a database of left ventriculograms with hand-drawn boundaries makes it possible to train the system to classify each image, pixel by pixel, based on the gray-scale distribution of each location throughout the cardiac cycle. From the class labels, the LV region in all images is determined simultaneously. The LV region initially estimated by the classifier is then refined by the region estimator, which propagates information from one image to another under motion constraints between the two images. Morphological operations represent the motion constraints by specifying an inner bound and an outer bound for the region.
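A minimal sketch of the two stages follows, assuming per-frame Gaussian class-conditional gray-scale models for the classifier and a fixed per-frame motion limit for the morphological bounds; the function names and parameters (classify_pixels, constrain_with_motion, max_motion_px) are illustrative and not taken from the paper.

```python
import numpy as np
from scipy import ndimage


def classify_pixels(frames, lv_mean, lv_std, bg_mean, bg_std, prior_lv=0.5):
    """Stage 1 (sketch): Bayesian per-pixel classification.

    frames:           (T, H, W) gray-scale stack covering one cardiac cycle
    lv_*/bg_* arrays: (T, H, W) class-conditional mean/std at each location and
                      frame, learned from hand-traced training studies
                      (the Gaussian model is an assumption of this sketch).
    Returns a (T, H, W) boolean stack of initial LV labels.
    """
    eps = 1e-6
    log_lv = (-0.5 * ((frames - lv_mean) / (lv_std + eps)) ** 2
              - np.log(lv_std + eps) + np.log(prior_lv))
    log_bg = (-0.5 * ((frames - bg_mean) / (bg_std + eps)) ** 2
              - np.log(bg_std + eps) + np.log(1.0 - prior_lv))
    # Label a pixel LV where its posterior log-score exceeds the background's.
    return log_lv > log_bg


def constrain_with_motion(region, prev_region, max_motion_px=5):
    """Stage 2 (sketch): refine one frame's LV region using the previous frame.

    Morphological erosion/dilation of the previous frame's region give an
    inner bound (the region must contain it) and an outer bound (the region
    must stay inside it), limiting how far the boundary can move per frame.
    """
    size = 2 * max_motion_px + 1
    selem = np.ones((size, size), dtype=bool)
    inner = ndimage.binary_erosion(prev_region, structure=selem)
    outer = ndimage.binary_dilation(prev_region, structure=selem)
    return (region | inner) & outer


# Example use: refine each frame's labels using its predecessor's region.
# labels = classify_pixels(frames, lv_mean, lv_std, bg_mean, bg_std)
# for t in range(1, labels.shape[0]):
#     labels[t] = constrain_with_motion(labels[t], labels[t - 1])
```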