UPMC/LIP6 at ImageCLEFphoto 2009

This working note describes the LIP6 runs for the ImageCLEF photo task 2009. Text retrieval is based on Okapi; visual retrieval is based on HSV histograms. Since we expect the text modality to be more effective than content-based image retrieval, we use a non-symmetric late fusion of text ranks and visual ranks. Finally, we apply two diversity methods, based on visual clustering and on random permutation. Results show that visual clustering outperforms text retrieval alone only when little textual information is available.
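The non-symmetric late fusion described above can be sketched as a weighted combination of per-document ranks that favors the text modality. This is a minimal illustration, not the authors' exact formula; the function name `late_fusion` and the weight `alpha` are hypothetical.

```python
def late_fusion(text_ranks, visual_ranks, alpha=0.8):
    """Combine text and visual ranks non-symmetrically.

    text_ranks / visual_ranks: dict mapping doc id -> rank (1 = best).
    alpha: hypothetical weight favoring the text modality (alpha > 0.5).
    Returns doc ids sorted by fused score (lower = better).
    """
    fused = {}
    for doc, t_rank in text_ranks.items():
        # Documents missing from the visual run get a worst-case rank.
        v_rank = visual_ranks.get(doc, len(visual_ranks) + 1)
        fused[doc] = alpha * t_rank + (1 - alpha) * v_rank
    return sorted(fused, key=fused.get)
```

For example, with `alpha = 0.8` a document ranked first by text but third by the visual run still comes out ahead of one ranked second by text and first visually, reflecting the asymmetry toward the text modality.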