Enriching audio-visual chat with conversation-based image retrieval and display

This paper presents the results of a user study carried out to evaluate an application prototype in which an audio-visual chat conversation between two users is augmented with pictures related to the topics of that conversation. The prototype analyses the conversation and deduces its topic by means of a keyword tree augmented by an ontology. It then retrieves pictures from Flickr based on this topic and displays them to the users. This mechanism is called conversation-based image retrieval. Fifteen participants were recruited for the user study; each session lasted approximately 30 minutes. Eye tracking and questionnaires were used to evaluate the participants' experiences. We found that participants value the use of pictures to augment an audio-visual chat application. Furthermore, participants indicated they would use it in a social context: talking to family, friends and acquaintances. One significant improvement to the prototype would be to use the participants' own pictures (personal user-generated content) instead of arbitrary pictures retrieved from Flickr.
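To make the conversation-based image retrieval mechanism concrete, the sketch below illustrates the pipeline in Python under stated assumptions: keywords are extracted from a transcribed utterance, matched against a small keyword-to-topic structure, and the resulting topic is used to query Flickr's public search API. The keyword data and the names `KEYWORD_TREE`, `deduce_topic`, and `search_flickr` are illustrative assumptions, not the authors' implementation; in the prototype the keyword tree is additionally augmented by an ontology, which is omitted here.

```python
# Minimal sketch of conversation-based image retrieval (assumed structure,
# not the authors' code): utterance -> keywords -> topic -> Flickr pictures.
import requests

FLICKR_ENDPOINT = "https://api.flickr.com/services/rest/"
FLICKR_API_KEY = "YOUR_API_KEY"  # placeholder; a real Flickr API key is required

# Toy keyword tree: each topic lists its trigger keywords. The prototype's
# ontology-based augmentation is not modelled in this flat stand-in.
KEYWORD_TREE = {
    "travel": {"holiday", "flight", "hotel", "beach"},
    "food": {"dinner", "restaurant", "recipe", "pizza"},
    "sports": {"football", "match", "goal", "tennis"},
}

def deduce_topic(utterance: str) -> str | None:
    """Return the topic whose keywords overlap most with the utterance."""
    words = set(utterance.lower().split())
    best_topic, best_hits = None, 0
    for topic, keywords in KEYWORD_TREE.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_topic, best_hits = topic, hits
    return best_topic

def search_flickr(topic: str, per_page: int = 5) -> list[str]:
    """Fetch a few photo URLs for the topic via Flickr's photo search method."""
    params = {
        "method": "flickr.photos.search",
        "api_key": FLICKR_API_KEY,
        "text": topic,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    }
    photos = requests.get(FLICKR_ENDPOINT, params=params).json()["photos"]["photo"]
    # Build static image URLs from the returned photo records.
    return [
        f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg"
        for p in photos
    ]

if __name__ == "__main__":
    topic = deduce_topic("We booked a flight and a hotel for the holiday")
    if topic:
        for url in search_flickr(topic):
            print(url)
```

In an actual audio-visual chat setting, `deduce_topic` would be fed the output of a speech recogniser and the retrieved pictures would be pushed to both participants' displays; those components are outside the scope of this sketch.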