MOVING VEHICLE TRACKING USING DISJOINT-VIEW MULTICAMERAS

Multicamera vehicle tracking is a necessary component of any video-based intelligent transportation system that extracts traffic parameters such as link travel times and origin/destination counts. In many applications, traffic cameras must be placed with disjoint (non-overlapping) views to cover a wide area. This paper presents a method for tracking moving vehicles in such camera networks. For the single-camera tracking phase, it introduces a new approach to handling inter-object occlusions, the most challenging part of that phase: the silhouettes of moving objects are chain-coded before and after occlusion, and occluded vehicles are separated by computing the longest common substring of the corresponding chain codes. For the multicamera phase, a new feature based on the relationships among surrounding vehicles is introduced to improve tracking accuracy. This feature is modeled by an exponential distribution and complements the appearance and space-time features when noise or dynamic traffic conditions prevent them from discriminating between corresponding and non-corresponding vehicles. A graph-based approach is then used to track vehicles across the camera network. Experimental results demonstrate the efficiency of the proposed method.
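The core of the occlusion-handling step described above is a longest-common-substring computation over silhouette chain codes. The sketch below is only illustrative, not the paper's implementation: it assumes silhouettes are encoded as Freeman chain codes, i.e. strings over the digits 0-7, and the function name is hypothetical.

```python
def longest_common_substring(a: str, b: str) -> str:
    """Return the longest contiguous substring shared by a and b.

    Standard dynamic-programming solution in O(len(a) * len(b)) time,
    keeping only one previous row of the DP table in memory.
    """
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)  # prev[j]: common-suffix length at a[i-2], b[j-1]
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best_len:
                    best_len, best_end = curr[j], i
        prev = curr
    return a[best_end - best_len:best_end]


# Hypothetical chain codes of a silhouette before and during occlusion;
# the shared run identifies the boundary segment belonging to one vehicle.
pre_occlusion = "7700112233"
during_occlusion = "4511223366"
print(longest_common_substring(pre_occlusion, during_occlusion))  # → 112233
```

Matching the longest shared boundary segment in this way lets the tracker attribute part of a merged silhouette to a previously observed vehicle, which is the basis for separating occluded vehicles.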