Storage and Retrieval for Image and Video Databases VII
Chung-Sheng Li; John R. Smith; Vittorio Castelli; Lawrence D. Bergman

In this paper, the performance of similarity retrieval from a database of earth core images using different sets of spatial and transform-based texture features is evaluated and compared. A benchmark consisting of 69 core images from rock samples is devised for the experiments. We show that the Gabor feature set is far superior to the other feature sets in terms of precision-recall on the benchmark images. This contrasts with an earlier report by the authors, in which the spatial-based feature set outperformed the other feature sets by a wide margin on a benchmark of satellite images, where the evaluation window had to be kept small (32 x 32) in order to extract homogeneous regions. Consequently, we conclude that the optimal texture feature set for texture-based similarity retrieval is highly application dependent and has to be carefully evaluated for each individual application scenario.

Rodney Long; George R.

We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database.

The customized-queries approach first classifies a query using the features that best differentiate the major classes, and then customizes the query to that class by using the features that best distinguish the subclasses within the chosen major class. This research is motivated by the observation that the features that are most effective in discriminating among images from different classes may not be the most effective for retrieving visually similar images within a class. This occurs in domains where not all pairs of images within one class have equivalent visual similarity. We apply this approach to content-based retrieval of high-resolution tomographic images of patients with lung disease and show that it yields substantially better retrieval performance than the traditional approach, which performs retrieval using a single feature vector.
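The two-stage idea in the preceding abstract can be illustrated with a short, hypothetical sketch. The choice of a k-nearest-neighbour classifier, the way feature subsets are passed in, and all variable names below are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of a "customized queries" retrieval flow: classify the query
# into a major class with inter-class features, then rank images of that class
# with class-specific (intra-class) features. All details are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def customized_query(query_vec, X, labels, interclass_dims, intraclass_dims, k=5):
    """query_vec: 1-D feature vector of the query image.
    X, labels: database feature matrix and major-class labels.
    interclass_dims: feature indices that best separate the major classes.
    intraclass_dims: dict mapping each class label to its retrieval features."""
    # Stage 1: coarse classification on the inter-class feature subset.
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X[:, interclass_dims], labels)
    major_class = clf.predict(query_vec[interclass_dims].reshape(1, -1))[0]

    # Stage 2: retrieval restricted to the predicted class, using the
    # feature subset that best captures within-class visual similarity.
    mask = labels == major_class
    dims = intraclass_dims[major_class]
    nn = NearestNeighbors(n_neighbors=min(k, int(mask.sum())))
    nn.fit(X[mask][:, dims])
    dist, idx = nn.kneighbors(query_vec[dims].reshape(1, -1))
    class_indices = np.flatnonzero(mask)
    return class_indices[idx[0]], dist[0]
```

In the paper's setting the two feature subsets would be learned from labelled HRCT images; here they are simply passed in as index lists to keep the sketch self-contained.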
Chi-Ren Shyu; T. Tony Cai; Lynn S. Broderick

In the picture archiving and communication systems (PACS) used in modern hospitals, the current practice is to retrieve images by keyword search, which returns a complete set of images from the same scan. Both diagnostically useful and negligible images in the image databases are retrieved and browsed by the physicians. In addition to text-based search queries, queries based on image contents and image examples have been developed and integrated into existing PACS systems. Most content-based image retrieval (CBIR) systems for medical image databases are designed to retrieve images individually. However, in a database of tomographic images it is often diagnostically more useful to simultaneously retrieve multiple images that are closely related, for example because they are physiologically contiguous. High-resolution computed tomography (HRCT) images, for instance, are taken as a series of cross-sectional slices of the human body, and typically several slices are relevant for making a diagnosis, requiring a PACS system that can retrieve a contiguous sequence of slices. In this paper, we present an extension to our physician-in-the-loop CBIR system that allows our algorithms to automatically determine the number of adjoining images to retain after certain key images have been identified by the physician. Only the key images so identified by the physician, together with the adjoining images that cohere with them, are kept on-line for fast retrieval; the rest of the images can be discarded if so desired. This results in a large reduction in the amount of storage needed for fast retrieval.

Roberts

DARWIN is a computer vision system that helps researchers identify individual bottlenose dolphins, Tursiops truncatus, by comparing digital images of the dorsal fins of newly photographed dolphins with a database of previously identified dolphin fins. In addition to the dorsal fin images, textual information containing sighting data is stored for each of the previously identified dolphins. The software uses a semiautomated process to create an approximation of the fin outline. The outline is used to formulate a
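The abstract above is cut off before it states how the traced fin outline is actually used, so the following sketch is only a generic illustration of outline-based matching under stated assumptions: each outline is resampled by arc length and database fins are ranked by a simple normalized point-wise distance, which is not DARWIN's published method.

```python
# Hedged sketch of outline matching: resample each traced fin outline to a
# fixed number of points, normalize translation and scale, and rank database
# fins by mean point-wise distance. Purely illustrative.
import numpy as np

def resample_outline(points, n=128):
    """Resample a 2-D outline (M x 2 array) to n points spaced evenly by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, points[:, 0]),
                            np.interp(t, s, points[:, 1])])

def outline_distance(a, b, n=128):
    """Distance between two outlines after removing translation and scale."""
    a, b = resample_outline(a, n), resample_outline(b, n)
    a -= a.mean(axis=0); b -= b.mean(axis=0)          # remove translation
    a /= np.linalg.norm(a); b /= np.linalg.norm(b)    # remove scale
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def rank_database(query_outline, database):
    """Rank previously identified fins (dict: fin_id -> outline) by similarity."""
    scores = {fin_id: outline_distance(query_outline, outline)
              for fin_id, outline in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```

A real matcher would also need to handle rotation and differences in where each traced outline begins and ends; both are ignored here for brevity.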