Coordinating camera motion for sensing uncertainty reduction
Consider an example: covering a shopping mall with surveillance cameras. Suppose a volume V1 is to be covered. Given a camera that can effectively cover a volume V2, approximately V1/V2 cameras are needed. In typical applications, however, not all of V1 is of interest: only small portions of it (such as the spaces containing human faces, for face recognition) need to be imaged. If this volume of interest is a small fraction p of V1, the number of cameras required drops to pV1/V2. This reduction is possible only if the cameras can be adaptively reoriented towards the interesting regions. In many applications, regions of interest, such as humans, can be detected from low-resolution imagery (e.g., motion detection to flag regions where humans may be present) or from alternative forms of sensors. We develop methods that use this low-resolution side information to adaptively control the motion of the deployed cameras, so that the smaller number of cameras, pV1/V2, achieves the effective coverage of V1/V2 static cameras.
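The camera-count argument above can be sketched numerically. This is a minimal illustration, not part of the paper; all volumes and the interest fraction p below are hypothetical values chosen for the example.

```python
import math

def cameras_required(v_total, v_cam, interest_fraction=1.0):
    """Approximate number of cameras needed, assuming each camera
    effectively covers a volume v_cam and only a fraction
    `interest_fraction` (p) of v_total must actually be imaged.

    Hypothetical helper for illustration; the paper gives only the
    ratios V1/V2 and pV1/V2, not a concrete formula implementation.
    """
    return math.ceil(interest_fraction * v_total / v_cam)

# Hypothetical numbers: a 10,000 m^3 mall (V1), 200 m^3 effective
# coverage per camera (V2), and only 2% of the volume (p = 0.02)
# containing regions of interest such as faces.
fixed_cameras = cameras_required(10_000, 200)           # V1/V2
adaptive_cameras = cameras_required(10_000, 200, 0.02)  # pV1/V2
print(fixed_cameras, adaptive_cameras)
```

With these illustrative numbers, blanket coverage needs 50 cameras while adaptively steered cameras need only 1, which is the reduction the abstract argues for.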