Organizing Video Streams for Clustering and Estimation of Popular Scenes

The widespread diffusion of mobile devices with embedded cameras has opened new challenges in the automatic understanding of video streams acquired by multiple users during events such as sports matches, expos, and concerts. Among these challenges is the identification of the most relevant and popular visual content (i.e., where users look). The popularity of a visual content is an important cue exploitable in several fields, including the estimation of the mood of the crowd attending an event and the estimation of interest in parts of a cultural heritage site. During live social events, people capture and share videos related to the event. The popularity of a visual content can be obtained through the “visual consensus” among the multiple video streams acquired by the different users’ devices. In this paper we address the problem of detecting and summarizing the “popular scenes” captured by users with mobile cameras during events. For this purpose, we have developed a framework called RECfusion, in which the key popular scenes of multiple streams are identified over time. The proposed system generates a video that captures the interests of the crowd, starting from a set of videos and taking scene content popularity into account. The frames composing the final popular video are automatically selected from the different video streams by considering the scene recorded by the highest number of users’ devices (i.e., the most popular scene).
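
Conceptually, the popularity-driven frame selection can be sketched as follows. This is a minimal illustration under assumed data structures (per-time-step mappings from device IDs to scene-cluster labels and to frames); the function name and layout are hypothetical and do not reproduce the actual RECfusion implementation, which also performs the scene clustering and tracking upstream and weighs frame quality:

```python
from collections import Counter

def select_popular_frames(assignments, frames):
    """Pick, at each time step, a frame from the scene cluster
    recorded by the largest number of devices.

    assignments: list of dicts, one per time step,
                 mapping device_id -> scene_cluster_id
                 (assumed to come from an upstream clustering step)
    frames:      list of dicts, one per time step,
                 mapping device_id -> frame (e.g., an image array)
    Returns the frame sequence forming the "popular video".
    """
    popular_video = []
    for clusters, step_frames in zip(assignments, frames):
        if not clusters:
            continue  # no device is streaming at this time step
        # Count how many devices are recording each scene cluster.
        support = Counter(clusters.values())
        popular_scene, _ = support.most_common(1)[0]
        # Take any device observing the popular scene as the source;
        # a quality-aware choice among these devices is omitted here.
        source = next(d for d, c in clusters.items()
                      if c == popular_scene)
        popular_video.append(step_frames[source])
    return popular_video

# Toy usage: two devices film the stage, one films the crowd,
# so the stage frame is selected for that time step.
assignments = [{"dev1": "stage", "dev2": "stage", "dev3": "crowd"}]
frames = [{"dev1": "frame_a", "dev2": "frame_b", "dev3": "frame_c"}]
print(select_popular_frames(assignments, frames))  # ['frame_a']
```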
