I-012 Linking Trajectories of a Moving Object on Multiple Cameras with Different Angles and Locations
Nowadays, digital video recording devices are everywhere: in handycams, compact digital cameras, and even mobile phones. With the growing popularity of social networking sites such as YouTube and Facebook, people readily share their recorded videos with family members, friends, or even the public through these sites. To facilitate this, some cameras are even equipped with a wireless link so that users can share or upload video directly to a video sharing site. As a result, a vast number of personal home videos can now be accessed by anyone over the Internet. Among those videos, someone may want to search for overlapping recordings, taken at the same place and time, that feature the same person he or she would like to see. For example, at festivals or sports events, many people record one or more interesting scenes, so their cameras shoot the same object from different positions and angles. We propose an unsupervised mechanism to correspond the trajectories of a person across multiple videos recorded independently at the same place and time by different cameras. The correspondence of the person can be established using location and shooting-direction information from each camera. Modern cameras, including smartphones, are equipped with a Global Positioning System (GPS) receiver and a digital compass, so the required information can be obtained from these sensors. We assume that the path of the tracked person is linear. The proposed object correspondence algorithm then estimates the direction of that path. Since every camera records the object from a different position and angle, the direction of the person as seen by each camera differs. Hence, depending on which camera is used as the reference, the direction information from all cameras has to be calibrated.
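The calibration step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes that an object's world-frame motion direction can be approximated as the camera's compass bearing plus the apparent direction observed in that camera's view, and that all angles are expressed in degrees.

```python
import math

def calibrate_direction(apparent_deg, camera_bearing_deg, reference_bearing_deg):
    """Re-express an object's apparent motion direction, observed in one
    camera's view, in the frame of a chosen reference camera.

    Hypothetical helper illustrating the calibration idea: the world-frame
    direction is approximated as the camera's compass bearing plus the
    apparent direction, then shifted into the reference camera's frame.
    All angles are in degrees; the result is normalized to [0, 360).
    """
    # Approximate world-frame direction of the object's path.
    world_deg = camera_bearing_deg + apparent_deg
    # Shift into the reference camera's frame and normalize.
    return (world_deg - reference_bearing_deg) % 360.0
```

For instance, a person moving at 30 degrees in the view of a camera heading 120 degrees would appear, to a reference camera heading 90 degrees, as moving at 60 degrees.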
Based on the object’s path, its calibrated direction information, and the position and estimated size of the object in the recorded image, we can link the trajectories among those videos and find the corresponding object in all the other videos. Unlike other proposals, e.g., [1][2][3][4], our algorithm works in the compressed domain, since most digital video cameras output compressed video, particularly in the H.264/AVC format. We use the algorithm proposed in [5] to detect and track moving objects and then link the trajectory created by the object on every camera. This paper is organized as follows. Section 2 describes previous work in this field. Section 3 explains our algorithm. Section 4 presents our experimental results, and Section 5 concludes our work.
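The linking step can be illustrated with a simple nearest-direction matcher. This is only a sketch of one cue: it matches trajectories by their calibrated directions alone, whereas the approach described above also uses the object's position and estimated size in the image; the function and threshold below are assumptions for illustration.

```python
def link_trajectories(ref_dirs, other_dirs, tol_deg=15.0):
    """Match each trajectory in the reference video to the trajectory in
    another video whose calibrated direction is closest, within tol_deg.

    ref_dirs, other_dirs: lists of directions in degrees, already expressed
    in the reference camera's frame. Returns a dict mapping each reference
    trajectory index to the best-matching index in the other video, or
    None when no direction falls within the tolerance.
    """
    links = {}
    for i, d_ref in enumerate(ref_dirs):
        best, best_diff = None, tol_deg
        for j, d_other in enumerate(other_dirs):
            # Smallest angular difference on the circle, in [0, 180].
            diff = abs((d_ref - d_other + 180.0) % 360.0 - 180.0)
            if diff <= best_diff:
                best, best_diff = j, diff
        links[i] = best
    return links
```

In practice, the direction difference would be combined with the positional and size cues mentioned above before committing to a match.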
[1] Takashi Matsuyama et al., "Real-time cooperative multi-target tracking by communicating active vision agents," Computer Vision and Image Understanding, 2005.
[2] Mubarak Shah et al., "Tracking across multiple cameras with disjoint views," Proceedings of the Ninth IEEE International Conference on Computer Vision, 2003.
[3] Yukihisa Ikegame et al., "Probabilistic estimation of pedestrian routes over non-overlapping views," 2005.