Tracking in Multiple Cameras with Disjoint Views
In most cases, a single camera cannot observe the complete area of interest because sensor resolution is finite and structures in the scene limit the visible areas. Multiple cameras are therefore required to observe large environments, and even then it is usually not possible to cover such areas completely. In realistic scenarios, surveillance of wide areas thus requires a system that can track objects while observing them through multiple cameras with non-overlapping fields of view. Moreover, it is preferable that the tracking approach does not require camera calibration or complete site modelling, since calibrated cameras and site models are not available in most situations. Maintaining calibration across a large network of sensors is also a daunting task, since a slight change in the position of a sensor requires the calibration process to be repeated. In this chapter, we present an algorithm that accommodates all these constraints and tracks people across multiple uncalibrated cameras with non-overlapping fields of view. The task of a multi-camera tracker is to establish correspondence between observations of the same object across cameras. Multi-camera tracking, especially across non-overlapping views, is a challenging problem for two reasons.
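To make the correspondence task concrete, the sketch below shows one simple, hypothetical formulation: each observation is summarised by an appearance descriptor and an exit/entry time, and observations leaving one view are paired with observations entering another by minimising a combined appearance and travel-time cost with the Hungarian algorithm. This is not the algorithm presented in the chapter; the `Observation` class, the `appearance_distance` and `transition_penalty` functions, and the `expected_delay`/`tolerance` parameters are assumptions introduced purely for illustration.

```python
import numpy as np
from dataclasses import dataclass
from scipy.optimize import linear_sum_assignment


@dataclass
class Observation:
    camera_id: int
    timestamp: float          # time the object exits/enters a view (seconds)
    appearance: np.ndarray    # e.g. a colour histogram of the tracked person


def appearance_distance(a: Observation, b: Observation) -> float:
    # L2 distance between appearance descriptors (an illustrative choice).
    return float(np.linalg.norm(a.appearance - b.appearance))


def transition_penalty(a: Observation, b: Observation,
                       expected_delay: float = 5.0,
                       tolerance: float = 10.0) -> float:
    # Penalise pairs whose inter-camera travel time is implausible for the
    # assumed camera layout; a non-positive delay means b appeared before a left.
    delay = b.timestamp - a.timestamp
    if delay <= 0:
        return np.inf
    return abs(delay - expected_delay) / tolerance


def match_across_cameras(exits, entries, max_cost: float = 2.0):
    """Pair observations exiting one view with observations entering another
    by minimising a combined appearance + timing cost (Hungarian algorithm)."""
    cost = np.full((len(exits), len(entries)), np.inf)
    for i, a in enumerate(exits):
        for j, b in enumerate(entries):
            cost[i, j] = appearance_distance(a, b) + transition_penalty(a, b)
    # The solver requires finite costs, so cap infeasible pairs at a large value.
    rows, cols = linear_sum_assignment(np.where(np.isfinite(cost), cost, 1e6))
    return [(exits[i], entries[j]) for i, j in zip(rows, cols)
            if cost[i, j] < max_cost]


if __name__ == "__main__":
    left_cam_a = [Observation(0, 10.0, np.array([0.20, 0.50, 0.30]))]
    entered_cam_b = [Observation(1, 14.0, np.array([0.25, 0.45, 0.30]))]
    print(match_across_cameras(left_cam_a, entered_cam_b))
```

In a real deployment the fixed appearance and timing costs above would have to account for illumination and pose changes between cameras and for the unknown inter-camera travel times, which is exactly what makes the problem difficult in the uncalibrated, non-overlapping setting considered here.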