Viz-A-Vis: Toward Visualizing Video through Computer Vision

In the established procedural model of information visualization, the first operation is to transform raw data into data tables. These transforms typically include abstractions that aggregate and segment relevant data, and they are usually defined by a human, whether a user or a programmer. The theme of this paper is that, for video, data transforms should be supported by low-level computer vision. High-level reasoning still resides with the human analyst, while part of the low-level perception is handled by the computer. To illustrate this approach, we present Viz-A-Vis, an overhead video capture and access system for activity analysis in natural settings over variable periods of time. Overhead video provides rich opportunities for long-term behavioral and occupancy analysis, but it also poses considerable challenges. We present initial steps toward addressing two of these challenges. First, overhead video generates overwhelmingly large volumes of footage that are impractical to analyze manually. Second, automatic video analysis remains an open problem in computer vision.
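
The kind of low-level vision transform described above can be illustrated with a minimal sketch. The example below uses simple frame differencing (an assumption; the paper does not specify the detector) to reduce overhead video to a per-frame motion score, the sort of aggregated signal that could populate a data table for a human analyst. The function name, file name, and threshold are hypothetical.

    # Minimal sketch: aggregate overhead video into a 1-D activity signal.
    # Frame differencing stands in for the low-level vision transform;
    # the actual Viz-A-Vis pipeline may use a different detector.
    import cv2
    import numpy as np

    def motion_activity_series(video_path, threshold=25):
        cap = cv2.VideoCapture(video_path)
        activity = []
        prev_gray = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                # Count pixels whose intensity changed by more than `threshold`
                diff = cv2.absdiff(gray, prev_gray)
                activity.append(int(np.count_nonzero(diff > threshold)))
            prev_gray = gray
        cap.release()
        return activity  # one motion score per frame, ready for tabular aggregation

    # Usage (hypothetical file name):
    # series = motion_activity_series("overhead_capture.avi")

Such a signal could then be aggregated over hours or days to support the long-term occupancy analysis the system targets, while interpretation of the resulting patterns remains with the analyst.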
