The most significant prerequisite for a realistic blood flow simulation is the precise extraction of the patient-specific geometry from digital computed tomography (CT) images. The reconstructed data sets are not always sufficient to extract the contours and surfaces of the large aortic structures directly for building the computational model. The raw images are therefore processed with low-level filters that remove inhomogeneity and undesired noise, yielding a smoother contour of the target structures while remaining close to the original image data set. A spatial filter, parameterized by patch size, is applied iteratively to the image. It will be shown that large patch sizes increase distortion, whereas small patch sizes do not yield a smooth contour. A new adaptive approach is presented that is based on the image gradient, which provides intrinsic information about the underlying objects in an image. This approach applies the filters locally in a controlled manner to achieve the desired smooth contour with minimal distortion.
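The abstract does not give the exact filter formulation; the following sketch illustrates one plausible reading of a gradient-adaptive, patch-based iterative smoother, assuming a 2-D CT slice stored as a floating-point NumPy array. The function name `adaptive_smooth` and the parameters `patch_size`, `edge_scale`, and `n_iter` are illustrative assumptions, not names taken from the paper.

```python
import numpy as np
from scipy import ndimage


def adaptive_smooth(image, patch_size=5, edge_scale=None, n_iter=10):
    """Iteratively blend a patch-based mean filter with the original
    intensities, weighting the amount of smoothing by the local image
    gradient so that strong edges (e.g. vessel walls) are preserved.

    Hypothetical sketch; not the paper's exact algorithm.
    """
    img = image.astype(float)
    for _ in range(n_iter):
        # Local average over a patch_size x patch_size neighbourhood.
        smoothed = ndimage.uniform_filter(img, size=patch_size)

        # Gradient magnitude as an edge indicator.
        gx = ndimage.sobel(img, axis=0)
        gy = ndimage.sobel(img, axis=1)
        grad = np.hypot(gx, gy)

        # edge_scale controls how quickly smoothing is suppressed near
        # strong edges (assumed parameter; defaults to the mean gradient).
        k = edge_scale if edge_scale is not None else grad.mean() + 1e-12
        weight = np.exp(-(grad / k) ** 2)  # ~1 in flat regions, ~0 at edges

        # Smooth strongly in homogeneous regions, weakly near contours.
        img = weight * smoothed + (1.0 - weight) * img
    return img
```

The exponential edge-stopping weight used here mirrors the conductance functions common in anisotropic diffusion filtering; any comparable monotonically decreasing function of the gradient magnitude would serve the same purpose of restricting the filter's action to homogeneous regions.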