Enriching audio-visual chat with conversation-based image retrieval and display
This paper presents the results of a user study carried out to evaluate an application prototype in which an audio-visual chat conversation between two users is augmented with pictures related to the topics of that conversation. The prototype analyses the conversation and deduces its topic by means of a keyword tree augmented by an ontology. It then retrieves pictures related to this topic from Flickr and displays them to the users. This mechanism is called conversation-based image retrieval. 15 participants were recruited for the user study; each session lasted approximately 30 minutes. Eye tracking and questionnaires were used to evaluate participants' experiences. We found that participants value the use of pictures to augment an audio-visual chat application. Furthermore, participants stated they would use it in a social context: talking to family, friends and acquaintances. One significant improvement to the prototype would be to use the participants' own pictures (personal user-generated content) instead of arbitrary pictures retrieved from Flickr.
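The abstract outlines a pipeline of keyword spotting, topic deduction via a keyword tree, and Flickr retrieval. The following is a minimal Python sketch of that pipeline under stated assumptions: the toy keyword tree, the helper names, and the topic-matching heuristic are illustrative and do not reproduce the paper's actual tree or ontology; only the public Flickr REST API (`flickr.photos.search`) is a real service and requires an API key.

```python
"""Minimal sketch of conversation-based image retrieval as described in the
abstract: spot keywords in transcribed chat text, map them to a topic via a
small keyword tree, and fetch matching pictures from Flickr.

The keyword tree, helper names, and matching heuristic below are assumptions
for illustration; the paper's actual tree and ontology are not reproduced.
"""
import requests

FLICKR_API = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder; a real Flickr API key is required

# Toy keyword tree: topic -> trigger keywords (assumption, not the paper's tree).
KEYWORD_TREE = {
    "travel": {"holiday", "flight", "beach", "hotel"},
    "food": {"dinner", "recipe", "restaurant", "pizza"},
    "sports": {"football", "match", "goal", "tennis"},
}


def deduce_topic(utterance: str) -> str | None:
    """Return the topic whose keywords best match the utterance, if any."""
    words = set(utterance.lower().split())
    best_topic, best_hits = None, 0
    for topic, keywords in KEYWORD_TREE.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_topic, best_hits = topic, hits
    return best_topic


def retrieve_pictures(topic: str, count: int = 5) -> list[str]:
    """Query the public Flickr photo search API and build photo page URLs."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": topic,
        "per_page": count,
        "format": "json",
        "nojsoncallback": 1,
    }
    response = requests.get(FLICKR_API, params=params, timeout=10).json()
    return [
        f"https://www.flickr.com/photos/{p['owner']}/{p['id']}"
        for p in response["photos"]["photo"]
    ]


if __name__ == "__main__":
    utterance = "We booked a flight and a hotel for the holiday"
    topic = deduce_topic(utterance)
    if topic:
        for url in retrieve_pictures(topic):
            print(url)
```

In this sketch the spoken conversation is assumed to be already transcribed to text; the prototype's audio processing and the display of retrieved pictures to both chat participants are outside its scope.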