A Tool Supporting Annotation and Analysis of Videos

Since the early 1990s, the Institute of Fundamental Theology at Graz University has investigated the relationship between theology and media, especially film, within its research focus Theology – Aesthetics – Visual Arts – Film – Culture. It is also a member of the International Research Group Film and Theology.1 An important focus of its research has been the detailed analysis of videos with respect to religious symbols. In addition, it explores the relationship between media and society, media and the construction of reality, the rise of the religious in media society, and the body as a site for religious experiences in film. Research in this area is not limited to Graz University but is also conducted internationally; well-known academics and institutions include Peter Horsfield, an Australian theologian; Stewart Hoover and the research center at the University of Colorado at Boulder;2 and the Center for Religion and Media at New York University.

Effective research in the area of theology and video (film) requires efficient video annotation tools.3 The main genre of interest for institutions such as the Institute of Fundamental Theology is the feature film. Unfortunately, in this genre in particular, practically no tools exist to support the researcher with video annotation tasks. ‘Video annotation’ in this context refers to adding high-level semantic information (usually in text form) to the video stream. Annotation makes hidden information explicit and thereby facilitates the analysis of the video. Owing to this lack of tools, the institute’s early research was carried out with a simple video recorder, a large pile of paper notes referencing specific parts of the film by time code, and a great deal of spooling back and forth; this approach is very time consuming. The main reason for the lack of tools in the feature film area is that this genre is of minor interest for
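To make the notion of video annotation concrete, the following is a minimal sketch of a time-coded annotation store in Python. It is purely illustrative and not the tool described in this paper: the class and field names (`Annotation`, `VideoAnnotations`, `start_s`, `end_s`) are assumptions, and the example annotations are invented. The idea is simply that each free-text note is attached to a video segment by time codes, so notes can be retrieved for any playback position instead of being searched for on paper.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start_s: float  # segment start in seconds (time code)
    end_s: float    # segment end in seconds
    text: str       # high-level semantic note in text form

@dataclass
class VideoAnnotations:
    """Hypothetical container for all annotations of one film."""
    annotations: list = field(default_factory=list)

    def add(self, start_s: float, end_s: float, text: str) -> None:
        self.annotations.append(Annotation(start_s, end_s, text))

    def at(self, t_s: float) -> list:
        """Return all annotations whose segment covers time t_s."""
        return [a for a in self.annotations if a.start_s <= t_s <= a.end_s]

# Invented example annotations for illustration only
doc = VideoAnnotations()
doc.add(61.0, 95.5, "cross motif visible in the background")
doc.add(90.0, 120.0, "dialogue alludes to the parable of the prodigal son")
hits = doc.at(92.0)  # both segments overlap t = 92 s
```

A query such as `doc.at(92.0)` replaces the spooling-and-paper workflow: instead of winding a tape to a time code noted on paper, the researcher retrieves every note covering that moment directly.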
