An increasing number of people own and use camcorders to make videos that capture their experiences and document their lives. These videos easily add up to many hours of material, yet most of them end up in a storage box and are never touched or watched again. The reasons for this are manifold. First, the raw video material is unedited and is therefore long-winded and lacking in visually appealing effects. Video editing would help, but it is so time-consuming that people rarely find the time to do it. Second, watching the same tape more than a few times becomes boring, since the video offers no variation or surprise during playback. Automatic video abstracting algorithms can process videos so that users want to play the material more often. However, existing automatic abstracting algorithms have been designed for feature films, newscasts or documentaries, and are thus inappropriate for home video material and raw video footage in general. In this paper, we present new algorithms that automatically generate amusing, visually appealing and variable video abstracts of home video material. They make use of a new, empirically motivated approach, also presented in the paper, to cluster time-stamped shots hierarchically into meaningful units. Last but not least, we propose a simple and natural extension of the way people acquire video - so-called on-the-fly annotations - which enables a completely new set of applications on raw video footage as well as better and more selective automatic video abstracts. Moreover, our algorithms are not restricted to home video but can also be applied to raw video footage in general.
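To make the idea of hierarchical clustering of time-stamped shots concrete, the following Python snippet is a minimal, hypothetical sketch. It assumes that each shot carries recording timestamps and that shots are merged whenever the temporal gap between consecutive shots stays below a threshold, with progressively larger thresholds producing coarser levels of the hierarchy; the paper's actual, empirically motivated clustering criteria may differ.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Shot:
    start: float  # recording time of the first frame (seconds since some reference point)
    end: float    # recording time of the last frame

def cluster_by_gap(shots: List[Shot], max_gap: float) -> List[List[Shot]]:
    """Group chronologically ordered shots whose temporal gap is at most max_gap."""
    clusters: List[List[Shot]] = []
    for shot in sorted(shots, key=lambda s: s.start):
        if clusters and shot.start - clusters[-1][-1].end <= max_gap:
            clusters[-1].append(shot)   # gap small enough: extend the current cluster
        else:
            clusters.append([shot])     # gap too large: start a new cluster
    return clusters

def cluster_hierarchically(shots: List[Shot], gaps: List[float]) -> List[List[List[Shot]]]:
    """Build one clustering per gap threshold, from fine (small gap) to coarse (large gap)."""
    return [cluster_by_gap(shots, gap) for gap in sorted(gaps)]

# Hypothetical example: four shots recorded in three bursts,
# clustered with thresholds of 5 minutes and 2 hours.
shots = [Shot(0, 30), Shot(40, 70), Shot(400, 450), Shot(8000, 8100)]
thresholds = [300.0, 7200.0]
for gap, level in zip(thresholds, cluster_hierarchically(shots, thresholds)):
    print(f"gap <= {gap:.0f}s -> {len(level)} cluster(s)")
```

In such a time-gap-based scheme, small thresholds would group shots taken within the same scene, while larger thresholds would merge scenes into higher-level units such as a single day or event.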