Unleashing Video Search

Video is rapidly becoming a regular part of our digital lives, and its tremendous growth is raising users' expectations that video will be as easy to search as text. Unfortunately, users still find it difficult to locate relevant content, and today's solutions are not keeping pace on problems ranging from video search to content classification to automatic filtering. In this talk we describe recent techniques that leverage the computer's ability to analyze the visual features of video and apply statistical machine learning to classify video scenes automatically. We examine related efforts on modeling large video semantic spaces and review public evaluations such as TRECVID, which are greatly facilitating research and development in video retrieval. We discuss the role of MPEG-7 as a way to store the metadata generated for video in a fully standards-based, searchable representation. Overall, we show how these approaches together go a long way toward truly unleashing video search.
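
The core technical idea summarized above is that low-level visual features, combined with statistical machine learning, can label video scenes with semantic concepts automatically. The following is a minimal sketch of that pattern, not the talk's actual system: it assumes each shot is represented by a color histogram and trains an SVM concept detector; the "outdoor" concept, the histogram features, and the synthetic data are illustrative assumptions only.

```python
# Sketch of concept detection from visual features: represent each video shot
# by a global color histogram and train a statistical classifier on it.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for real shot features: 64-bin color histograms, one per shot.
# In practice these would be extracted from decoded key frames of the video.
n_shots = 400
histograms = rng.dirichlet(np.ones(64), size=n_shots)

# Stand-in labels for a hypothetical binary concept such as "outdoor".
labels = (histograms[:, :8].sum(axis=1) > histograms[:, -8:].sum(axis=1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    histograms, labels, test_size=0.25, random_state=0
)

# A support vector machine is one common choice of statistical classifier
# for this kind of semantic concept detection.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a full system, one such detector would be trained per concept in the semantic space, and the resulting concept scores could be stored as searchable metadata, for example in an MPEG-7 description.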