Content identification and search in visual lifelogs
Over recent years we have developed technologies for analysing, indexing, browsing and searching visual content, including both still images and moving video. We've used metadata captured when the content was created, we've used social tagging and other user-generated content, and we've used within-frame information including colour, shape and texture. In the vast majority of applications for searching and identifying things in visual content, we've used still images, mostly photos, and we've used video, including movies, TV news, other TV content, home movies and surveillance video, and in most of these applications the end-goal has been to search for clips or to summarise long video segments into something shorter. In some of our recent work we have been working with visual lifelogs: still images and moving video captured from wearable cameras. Lifelog video presents different challenges from other video genres, but we can make good progress in applying content-based techniques to it when the application is search. However, when we go beyond search or summarisation as the application, we can achieve a surprising amount of progress and reveal quite deep insights when analysing visual lifelogs. This presentation will present our work on this topic.