Toward Spatial Queries for Spatial Surveillance Tasks
Surveillance systems are largely focused on the movement, storage, and review of video and audio streams. The recent shift from human monitoring toward automated interpretation presages a fundamental change in our relationship with surveillance systems. Despite this shift, the state of the art has so far remained trapped by the notion of a sensor stream: the systems being sold today still largely constrain their analysis tools to operate on a single input stream. Some research systems have tried to present video streams in context, superimposed on a floor plan. Others allow searches for salient people or objects across video streams. We present here a technique for generating queries that are embedded in context. We allow the operator to specify queries that take advantage of the spatial context, using spatial gestures to assemble the query terms on a map of the site. We show an early prototype system operating on data from a research facility observed by a heterogeneous network of sensors.
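The abstract does not give implementation details, but the following is a minimal sketch of how a map-anchored spatial query might be represented and evaluated, assuming sensor detections have already been registered to site-plan coordinates. All names here (SpatialQuery, Detection, the gestured polygon region) are illustrative assumptions, not the authors' API.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in site-plan coordinates

@dataclass
class Detection:
    """A sensor observation already registered to the site floor plan (assumed)."""
    sensor_id: str
    label: str          # e.g. "person", "vehicle"
    position: Point     # map coordinates
    timestamp: float    # seconds since some reference time

@dataclass
class SpatialQuery:
    """A query assembled by gesturing a region on the site map (hypothetical)."""
    region: List[Point]               # polygon sketched by the operator
    label: str                        # object class of interest
    time_window: Tuple[float, float]  # (start, end)

    def _contains(self, p: Point) -> bool:
        """Ray-casting point-in-polygon test for the gestured region."""
        x, y = p
        inside = False
        n = len(self.region)
        for i in range(n):
            x1, y1 = self.region[i]
            x2, y2 = self.region[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def evaluate(self, detections: List[Detection]) -> List[Detection]:
        """Return detections matching the label, time window, and gestured region."""
        t0, t1 = self.time_window
        return [d for d in detections
                if d.label == self.label
                and t0 <= d.timestamp <= t1
                and self._contains(d.position)]

# Example: people seen inside a gestured courtyard region during a time window.
query = SpatialQuery(region=[(0, 0), (10, 0), (10, 8), (0, 8)],
                     label="person",
                     time_window=(1_000.0, 4_600.0))
hits = query.evaluate([
    Detection("cam-03", "person", (4.2, 3.1), 2_000.0),
    Detection("cam-07", "vehicle", (5.0, 5.0), 2_500.0),
])
print([d.sensor_id for d in hits])   # -> ['cam-03']
```

In a full system the polygon would come from the operator's gesture on the map display, and the detections from the heterogeneous sensor network described in the abstract; this sketch only shows the spatial-containment and filtering step.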