SmartPlayer: user-centric video fast-forwarding

In this paper, we propose a new video interaction model called adaptive fast-forwarding to help people quickly browse videos using predefined semantic rules. The model is designed around the metaphor of scenic car driving, in which the driver slows down near areas of interest and speeds through unexciting stretches. Results from a preliminary user study of our video player suggest the following: (1) the player should adaptively adjust the current playback speed based on the complexity of the present scene and on predefined semantic events; (2) the player should learn the user's preferences for predefined event types as well as a suitable playback speed; (3) the player should fast-forward the video continuously at a playback rate acceptable to the user, so that undefined events or areas of interest are not missed entirely. Furthermore, our user study results suggest that, for certain types of video, SmartPlayer yields a better browsing and fast-forwarding experience than the interaction models of existing video players.
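To make the adaptive fast-forwarding behavior concrete, the following is a minimal sketch of a playback-speed controller, assuming a per-segment scene-complexity score in [0, 1] and a set of detected event labels. The class name PlaybackController, the speed bounds, and the linear speed formula are illustrative assumptions, not the paper's actual method; the paper's event detectors and preference-learning procedure are not reproduced here.

from dataclasses import dataclass, field


@dataclass
class PlaybackController:
    """Chooses a playback rate per video segment (illustrative sketch).

    Segments containing events the user cares about play at normal speed;
    uneventful segments are sped up in proportion to how simple the scene
    is, but never beyond `max_speed`, so playback stays continuous rather
    than skipping frames.
    """
    base_speed: float = 1.0          # normal playback rate
    max_speed: float = 8.0           # fastest rate the user still accepts
    preferred_events: set = field(default_factory=set)  # learned event types

    def speed_for_segment(self, complexity: float, events: set) -> float:
        """complexity in [0, 1]; events are labels detected in this segment."""
        if events & self.preferred_events:
            return self.base_speed   # slow down for interesting events
        # Higher scene complexity -> closer to normal speed; simple scenes -> faster.
        speed = self.base_speed + (1.0 - complexity) * (self.max_speed - self.base_speed)
        return min(speed, self.max_speed)

    def learn_preference(self, event_type: str) -> None:
        """Record an event type the user slowed down for."""
        self.preferred_events.add(event_type)


# Usage: a segment with a preferred event plays at normal speed,
# while a visually simple, uneventful segment is fast-forwarded.
player = PlaybackController()
player.learn_preference("home_run")
print(player.speed_for_segment(complexity=0.2, events={"home_run"}))  # 1.0
print(player.speed_for_segment(complexity=0.2, events=set()))         # ~6.6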
