Automatic annotation and retrieval of large-scale images/videos based on distributed users

In this paper, a mechanism for the automatic annotation and retrieval of large-scale images/videos (I/Vs) based on distributed users is proposed, and a prototype system built on it is implemented with the .NET Framework. Furthermore, we describe the algorithm for annotating the meanings of I/Vs by their associative values with predefined categories, as well as the method used to implement the system. Experimental results evaluating whether users can find the I/Vs they expect show that the proposed approach is effective and efficient for annotating and retrieving large collections of I/Vs.
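The paper's exact algorithm is not reproduced in this abstract; the following is a minimal sketch of the general idea of annotating an item by its associative values with predefined categories: each I/V receives a score per category, and categories whose score clears a threshold become its annotations. All names here (`associative_value`, `CATEGORIES`, `THRESHOLD`, the keyword sets) are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical predefined categories and acceptance threshold.
CATEGORIES = ["landscape", "portrait", "sports"]
THRESHOLD = 0.6

# Toy stand-in for the paper's association measure: the fraction of a
# category's keywords that appear among the item's features/tags.
CATEGORY_KEYWORDS = {
    "landscape": {"sky", "mountain", "tree"},
    "portrait": {"face", "person"},
    "sports": {"ball", "field", "player"},
}

def associative_value(item_features, category):
    """Score the association between an item and one category."""
    keywords = CATEGORY_KEYWORDS[category]
    return len(keywords & set(item_features)) / len(keywords)

def annotate(item_features):
    """Annotate an item with every category whose associative value
    reaches the threshold (zero, one, or several categories)."""
    return [c for c in CATEGORIES
            if associative_value(item_features, c) >= THRESHOLD]

print(annotate({"sky", "mountain", "person"}))
```

Retrieval then reduces to matching a user's query category against the stored annotations, which is what makes the approach scale to large collections.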
