New Strategies for Image Annotation: Overview of the Photo Annotation Task at ImageCLEF 2010

The ImageCLEF 2010 Photo Annotation Task poses the challenge of automated annotation of 93 visual concepts in Flickr photos. The participants were provided with a training set of 8,000 Flickr images including annotations, EXIF data and Flickr user tags. Testing was performed on 10,000 Flickr images, and submissions were differentiated between approaches considering solely visual information, approaches relying on textual information, and multi-modal approaches. Half of the ground truth was acquired with a crowdsourcing approach. The evaluation followed two paradigms: per concept and per example. In total, 17 research teams participated in the multi-label classification challenge with 63 submissions. Summarizing the results, the task could be solved with a MAP of 0.455 in the multi-modal configuration, with a MAP of 0.407 in the visual-only configuration and with a MAP of 0.234 in the textual configuration. For the evaluation per example, 0.66 F-ex and 0.66 OS-FCS could be achieved for the multi-modal configuration, 0.68 F-ex and 0.65 OS-FCS for the visual configuration and 0.26 F-ex and 0.37 OS-FCS for the textual configuration.
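The two evaluation paradigms can be illustrated with the standard definitions of the reported measures: Mean Average Precision (MAP) averages, over concepts, the precision at each correctly ranked positive image, while the example-based F-measure (F-ex) scores each image's predicted label set against its ground truth. The sketch below is not the official ImageCLEF evaluation software; it is a minimal illustration of these standard measures, assuming rows of `score_matrix`/`label_matrix` correspond to concepts for MAP and to images for F-ex. (The OS-FCS measure, which weights label mismatches by an ontology-based similarity, is not reproduced here.)

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one concept: mean precision at each positive image
    in the ranking induced by descending classifier score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                      # positives seen so far
    precisions = hits / np.arange(1, len(labels) + 1)
    return float(precisions[labels == 1].mean())

def mean_average_precision(score_matrix, label_matrix):
    """MAP: average the per-concept AP values (one row per concept)."""
    return float(np.mean([average_precision(s, l)
                          for s, l in zip(score_matrix, label_matrix)]))

def example_based_f1(pred_matrix, label_matrix):
    """F-ex: F1 between predicted and true label sets per image
    (one row per image), averaged over all images."""
    f1s = []
    for pred, true in zip(np.asarray(pred_matrix), np.asarray(label_matrix)):
        tp = np.sum((pred == 1) & (true == 1))
        denom = pred.sum() + true.sum()
        f1s.append(2 * tp / denom if denom else 1.0)  # both sets empty: perfect
    return float(np.mean(f1s))
```

For example, a concept ranked perfectly (all positives above all negatives) yields an AP of 1.0, and an image whose binary prediction vector matches the ground truth exactly contributes an F1 of 1.0 to F-ex.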
