Just-in-time personalized video presentations

With the high-quality video cameras on mobile devices, it is relatively easy to capture a significant volume of video content at community events such as local concerts or sporting events. A more difficult problem is selecting and sequencing individual media fragments to match the personal interests of a viewer of such content. In this paper, we consider an infrastructure that supports the just-in-time delivery of personalized content. Based on user profiles and interests, tailored video mash-ups can be created at view-time and then further refined via simple end-user interaction. Unlike other mash-up research, our system focuses on client-side compilation based on personal (rather than aggregate) interests. This paper concentrates on the language and infrastructure issues required to support just-in-time video composition and delivery. Using a high school concert as an example, we derive a set of requirements for dynamic content delivery. We then present an architecture and infrastructure that meet these requirements. We conclude with a technical and user analysis of the just-in-time personalized video approach.