Through-the-Lens Synchronisation for Heterogeneous Camera Networks

Camera synchronisation is the temporal alignment of a set of video sequences acquired independently by two or more cameras. Accurate synchronisation is crucial for a wide variety of applications requiring multi-camera setups, ranging from 3D modelling of dynamic scenes (e.g., a performance or a sports event) to video surveillance and super-resolution. Conventional synchronisation methods, which typically rely on hardware or audio signals, have practical limitations that constrain the size and spatial extent of the network [2][1]. Through-the-lens synchronisation offers a robust and flexible way to synchronise a camera network from the content it generates. In this paper, we propose a bottom-up synchronisation algorithm that estimates a frame rate and an offset for each member of a network of two or more cameras. Our approach first computes a relative synchronisation estimate between each camera pair, from which the absolute synchronisation parameters of the individual cameras are then calculated (Figure 1). The algorithm can handle hybrid networks of static and moving cameras with different resolutions and frame rates, and does not require rigid objects, long trajectories or fields-of-view that overlap across more than two cameras. It only needs a set of image features on the dynamic scene elements and the geometric relation between the images (which can be obtained from the static background features).

Relative Synchronisation: The frame indices t_j of the jth camera are related to the frame indices t_i of the ith camera by the line t_j = α_ij t_i + β_ij, where α_ij is the ratio of the two frame rates and β_ij is the temporal offset between the two sequences.
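As an illustration of how such a pairwise relative estimate could be obtained, the sketch below fits the line t_j = α t_i + β robustly with a basic RANSAC loop followed by least-squares refinement. It assumes putative corresponding frame indices between the two cameras are already available (e.g., from matched dynamic features), which is a simplification of the geometric formulation used in the paper; the function name, inlier tolerance and iteration count are illustrative choices, not values from the paper.

import numpy as np

def fit_relative_sync(t_i, t_j, n_iters=2000, tol=0.5, seed=None):
    # Fit t_j = alpha * t_i + beta robustly from putative frame correspondences
    # using a simple RANSAC loop, then refine on the consensus set.
    rng = np.random.default_rng(seed)
    t_i, t_j = np.asarray(t_i, float), np.asarray(t_j, float)
    best = np.zeros(len(t_i), bool)
    for _ in range(n_iters):
        a, b = rng.choice(len(t_i), size=2, replace=False)
        if t_i[a] == t_i[b]:
            continue                                   # degenerate sample, skip
        alpha = (t_j[a] - t_j[b]) / (t_i[a] - t_i[b])
        beta = t_j[a] - alpha * t_i[a]
        inliers = np.abs(t_j - (alpha * t_i + beta)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    if best.sum() < 2:
        raise ValueError("no consensus set found")
    # Least-squares refinement on the inliers.
    A = np.stack([t_i[best], np.ones(best.sum())], axis=1)
    alpha, beta = np.linalg.lstsq(A, t_j[best], rcond=None)[0]
    return alpha, beta, best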

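Once a relative estimate (α_ij, β_ij) is available for each connected camera pair, the absolute per-camera parameters can be recovered by linear least squares over the camera graph. The sketch below is a minimal illustration of this bottom-up step under the convention t_k = rate_k * T + offset_k for a common timeline T, with a reference camera fixing the gauge; it is not the paper's exact optimisation, and the names are illustrative.

import numpy as np

def solve_absolute(pairs, n_cameras, ref=0):
    # pairs: iterable of (i, j, alpha_ij, beta_ij) with t_j = alpha_ij * t_i + beta_ij.
    # Returns per-camera (rates, offsets) such that t_k = rates[k] * T + offsets[k],
    # with camera `ref` fixed to rate 1 and offset 0.
    pairs = list(pairs)
    # 1) Frame rates: alpha_ij = rate_j / rate_i, i.e. log rate_j - log rate_i = log alpha_ij.
    A = np.zeros((len(pairs) + 1, n_cameras))
    b = np.zeros(len(pairs) + 1)
    for r, (i, j, alpha, _) in enumerate(pairs):
        A[r, j], A[r, i], b[r] = 1.0, -1.0, np.log(alpha)
    A[-1, ref] = 1.0                                   # gauge: log rate_ref = 0
    rates = np.exp(np.linalg.lstsq(A, b, rcond=None)[0])
    # 2) Offsets: beta_ij = offset_j - alpha_ij * offset_i.
    C = np.zeros((len(pairs) + 1, n_cameras))
    d = np.zeros(len(pairs) + 1)
    for r, (i, j, alpha, beta) in enumerate(pairs):
        C[r, j], C[r, i], d[r] = 1.0, -alpha, beta
    C[-1, ref] = 1.0                                   # gauge: offset_ref = 0
    offsets = np.linalg.lstsq(C, d, rcond=None)[0]
    return rates, offsets

With more pairwise estimates than cameras (e.g., pairs (0,1), (1,2) and (0,2) in a three-camera network), both systems are over-determined and the solve averages out inconsistencies between the relative measurements.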
[1] Martin A. Fischler and Robert C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 1981.

[2] Tanveer F. Syeda-Mahmood et al. View-Invariant Alignment and Matching of Video Sequences. In Proceedings of the Ninth IEEE International Conference on Computer Vision, 2003.

[3] Marc Pollefeys et al. Camera Network Calibration and Synchronization from Silhouettes in Archived Video. International Journal of Computer Vision, 2010.

[4] Ian D. Reid et al. Video Synchronization from Human Motion Using Rank Constraints. Computer Vision and Image Understanding, 2009.

[5] Peter Kovesi et al. Using Space-Time Interest Points for Video Sequence Synchronization. MVA, 2007.

[6] Hans-Peter Seidel et al. Markerless Motion Capture with Unsynchronized Moving Cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

[7] Cheng Lei et al. Tri-Focal Tensor-Based Multiple Video Synchronization with Subframe Optimization. IEEE Transactions on Image Processing, 2006.

[8] Lior Wolf et al. Wide Baseline Matching between Unsynchronized Video Sequences. International Journal of Computer Vision, 2006.

[9] Xin Li et al. Subframe Video Synchronization via 3D Phase Correlation. In Proceedings of the IEEE International Conference on Image Processing (ICIP), 2006.

[10] Marc Pollefeys et al. Video Synchronization via Space-Time Interest Point Distribution. 2004.

[11] Marcus A. Magnor et al. Subframe Temporal Alignment of Non-Stationary Cameras. In Proceedings of the British Machine Vision Conference (BMVC), 2008.

[12] Tim J. Ellis et al. Multi Camera Image Tracking. Image and Vision Computing, 2006.

[13] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 2004.

[14] Kiriakos N. Kutulakos et al. Linear Sequence-to-Sequence Alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.

[15] Adrian Hilton et al. Wand-Based Multiple Camera Studio Calibration. 2007.

[16] Ingemar J. Cox et al. A Maximum Likelihood Stereo Algorithm. Computer Vision and Image Understanding, 1996.

[17] Jean-Yves Guillemaut et al. Joint Multi-Layer Segmentation and Reconstruction for Free-Viewpoint Video Applications. International Journal of Computer Vision, 2011.

[18] Joan Serrat et al. Video Alignment for Change Detection. IEEE Transactions on Image Processing, 2011.

[19] Luc Van Gool et al. Synchronizing Video Sequences. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2004.

[20] Christian Bauckhage et al. Efficient and Robust Alignment of Unsynchronized Video Sequences. In Proceedings of the DAGM Symposium, 2011.

[22] Denis Simakov et al. Feature-Based Sequence-to-Sequence Matching. International Journal of Computer Vision, 2006.

[23] Cordelia Schmid et al. A Performance Evaluation of Local Descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005.

[24] Yael Moses et al. Video Synchronization Using Temporal Signals from Epipolar Lines. In Proceedings of the European Conference on Computer Vision (ECCV), 2010.

[25] Jean-Yves Bouguet. Pyramidal Implementation of the Lucas Kanade Feature Tracker. 1999.

[26] Jean-Yves Guillemaut et al. Calibration of Nodal and Free-Moving Cameras in Dynamic Scenes for Post-Production. In Proceedings of the International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), 2011.

[27] Yaron Caspi. PhD thesis, 2003.

[28] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2001.