Look there! Predicting where to look for motion in an active camera network

A framework is proposed that answers the following question: if a moving object is observed by one camera in a pan-tilt-zoom (PTZ) camera network, which other camera(s) might foveate on that object within a predefined time window, and what would the corresponding PTZ parameter settings be? No calibration is assumed, and there are no restrictions on camera placement or initial parameter settings. The framework builds a predictive model over time. Initially, the cameras follow randomized "tours" through a discretized PTZ space. If a moving object is detected in the fields of view of two or more cameras at the same instant or within a predefined time window, the model is updated to record the camera associations and the corresponding parameter settings. As more moving objects are observed, the model adapts and the most frequent associations emerge. The formulation also allows the system to verify its predictions and to reinforce those that prove correct. The system is demonstrated observing people in an office environment with a three-camera PTZ network.
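The association-learning scheme described above can be sketched as a simple co-occurrence table over discretized PTZ settings. The class and method names below are hypothetical illustrations (the abstract does not specify an implementation), and the `weight` parameter for reinforcement is an assumption:

```python
from collections import defaultdict

class AssociationModel:
    """Hedged sketch of the abstract's idea: count how often a moving
    object seen at one (camera, PTZ) setting is co-detected at another
    within the time window, then predict the most frequent partner."""

    def __init__(self):
        # counts[(cam_a, ptz_a)][(cam_b, ptz_b)] = number of co-detections
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, cam_a, ptz_a, cam_b, ptz_b):
        """Record a co-detection of the same object by two cameras
        (symmetric update, since association runs both ways)."""
        self.counts[(cam_a, ptz_a)][(cam_b, ptz_b)] += 1
        self.counts[(cam_b, ptz_b)][(cam_a, ptz_a)] += 1

    def predict(self, cam, ptz):
        """Return the (camera, PTZ) pair most frequently associated with
        this setting, or None if no associations have been recorded."""
        assoc = self.counts.get((cam, ptz))
        if not assoc:
            return None
        return max(assoc, key=assoc.get)

    def reinforce(self, cam_a, ptz_a, cam_b, ptz_b, weight=2):
        """Strengthen a verified prediction, mirroring the abstract's
        verification/reinforcement step (the weight is assumed)."""
        self.counts[(cam_a, ptz_a)][(cam_b, ptz_b)] += weight
        self.counts[(cam_b, ptz_b)][(cam_a, ptz_a)] += weight
```

In use, each camera's detections during the randomized tours feed `observe`; once the table is populated, `predict` answers the question posed in the abstract, and confirmed hand-offs call `reinforce`.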
