With rapid advances in video processing technology and ever-increasing network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of interest to users. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, where each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses three major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measures, and (c) efficient indexing of the compact representations. In this paper, we propose a number of methods to achieve fast similarity search over very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by the two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences, greatly reducing the time complexity of the video similarity measure. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique that rotates and shifts the original axis system using PCA so that the original inter-distance between two high-dimensional vectors is maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. This transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on large real video datasets demonstrate the effectiveness of our proposals, which outperform existing methods significantly.
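As a rough illustration of the ViTri similarity estimate described above, the sketch below models each cluster as a sphere with a position, radius, and frame density, and estimates the number of shared frames as the intersection volume multiplied by the minimal density. For readability it uses 3-D spheres with a closed-form lens-volume formula; the paper's ViTris live in a high-dimensional feature space, where the intersection volume is computed differently. The names `ViTri`, `sphere_lens_volume`, and `estimated_shared_frames` are illustrative, not from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class ViTri:
    center: tuple    # cluster position (sphere center)
    radius: float    # radius of the tightly bounded sphere
    density: float   # frames per unit volume inside the sphere


def sphere_lens_volume(r1: float, r2: float, d: float) -> float:
    """Intersection volume of two 3-D spheres whose centers are distance d apart."""
    if d >= r1 + r2:            # disjoint spheres: no overlap
        return 0.0
    if d <= abs(r1 - r2):       # smaller sphere fully contained in the larger
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    # closed-form "lens" volume for two partially overlapping spheres
    return (math.pi * (r1 + r2 - d) ** 2
            * (d * d + 2.0 * d * (r1 + r2) - 3.0 * (r1 - r2) ** 2)
            / (12.0 * d))


def estimated_shared_frames(a: ViTri, b: ViTri) -> float:
    """Estimated number of similar frames shared by two clusters:
    intersection volume multiplied by the minimal density."""
    d = math.dist(a.center, b.center)
    return min(a.density, b.density) * sphere_lens_volume(a.radius, b.radius, d)
```

Summing such estimates over all cluster pairs of two videos yields the estimated total number of shared frames, from which the overall video similarity (the percentage of similar frames) is derived.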