The FRAMES processing model for the synthesis of dynamic virtual video sequences

The FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) is developing an experimental environment for content-based video retrieval and dynamic virtual video synthesis from archives of video data. This paper describes the FRAMES dynamic virtual video synthesis process. The generation of dynamic virtual videos is based upon content descriptions of archived material, together with a specification, or prescription, of the videos that are to be created. Several types of query referring to semantic features can be embedded within a prescription; the execution of these queries determines the specific video content that is displayed. The paper first describes the semantic descriptions used by FRAMES, which are based upon a multi-level model of video semantics, and then presents the query types and associated execution processes used to generate dynamic virtual videos from prescriptions and content descriptions.
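
To make the process concrete, the sketch below illustrates, in hypothetical Python, how a prescription containing embedded semantic queries might be resolved against content descriptions of archived clips to assemble a virtual video sequence. The data model, query form, and all names (Clip, Prescription, synthesise, the description fields) are illustrative assumptions for this sketch and do not reflect the actual FRAMES prescription language or content-description schema.

```python
# Illustrative sketch only: the data model and query form here are assumptions,
# not the actual FRAMES prescription language or content-description schema.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Clip:
    """An archived video segment with a semantic content description."""
    clip_id: str
    description: Dict[str, str]  # e.g. {"character": "narrator", "event": "interview"}


# A query is modelled here as a predicate over a clip's content description.
Query = Callable[[Clip], bool]


@dataclass
class Prescription:
    """A specification of a virtual video as an ordered list of embedded queries."""
    queries: List[Query] = field(default_factory=list)


def synthesise(prescription: Prescription, archive: List[Clip]) -> List[Clip]:
    """Resolve each embedded query against the archive, in order, to build
    the sequence of clips that makes up the dynamic virtual video."""
    sequence: List[Clip] = []
    for query in prescription.queries:
        matches = [clip for clip in archive if query(clip)]
        if matches:
            sequence.append(matches[0])  # naive choice; real selection would be richer
    return sequence


# Example usage with a toy archive.
archive = [
    Clip("c1", {"character": "narrator", "event": "introduction"}),
    Clip("c2", {"character": "expert", "event": "interview"}),
]
prescription = Prescription(queries=[
    lambda c: c.description.get("event") == "introduction",
    lambda c: c.description.get("character") == "expert",
])
print([clip.clip_id for clip in synthesise(prescription, archive)])
```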