Multimodal approach to measuring excitement in video

We present in this paper an approach to mimicking the excitement evoked in a user while watching a video. The proposed approach is meant to enhance the user's comfort when dealing with the large amount of broadcast digital television data reaching their home. For this reason, we deliberately avoid using any sensors placed on users: the simulation of user excitement is based here solely on cues that are available in the digital video stream, and that can be extracted using standard audio and video processing tools as well as by observing the way the video is edited. The relation between the extracted features and the evoked excitement is drawn partly from psycho-physiological research and partly from an analysis of video production practice. Our methodology is generic and can be employed broadly in video abstraction and for revealing the affective characteristics of video content.
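The kind of mapping described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual model: it assumes three per-frame cues of the sort mentioned in the abstract (motion activity, shot-cut rate, audio energy), normalizes each, combines them with weights, and smooths the result to mimic the inertia of an affective response. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def excitement_curve(motion, cut_rate, energy, weights=(1/3, 1/3, 1/3), win=5):
    """Combine per-frame cues into a single excitement curve (illustrative sketch).

    motion, cut_rate, energy: 1-D arrays of equal length holding hypothetical
    per-frame cue values. Each cue is min-max normalized to [0, 1], the cues
    are merged as a weighted sum, and the result is smoothed with a
    moving-average window so the curve rises and decays gradually, roughly
    mimicking the inertia of human affective response.
    """
    def normalize(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    cues = [normalize(c) for c in (motion, cut_rate, energy)]
    combined = sum(w * c for w, c in zip(weights, cues))
    kernel = np.ones(win) / win           # simple moving-average smoother
    return np.convolve(combined, kernel, mode="same")
```

In a usage such as `excitement_curve(motion, cuts, energy)` on cues extracted from a broadcast, frames where all three cues peak together would yield the highest excitement values, which is the intuition the abstract appeals to; the actual feature set, weighting, and smoothing used in the paper may differ.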
