Coverage evaluation of camera networks for facilitating big-data management in film production

Film production inherently generates large volumes of data, at a rate of 27 TB/hour for a conventional multi-camera setup [1]. In this paper, we propose a video coverage monitoring framework for such setups which enables the identification of problematic sensor configurations, and therefore significantly reduces the data volume by eliminating unusable material before it is generated. Our approach analyses the projection of a set of 3D volume elements onto the cameras to verify whether they satisfy a number of constraints predicting the success of specified tasks. We demonstrate the utility of the proposed framework on three use cases, and conclude that our approach facilitates the development of tools of considerable practical value.
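The core idea above, projecting 3D volume elements into each camera and testing per-view constraints, can be sketched as follows. This is a minimal illustration assuming calibrated pinhole cameras (intrinsics K, rotation R, translation t) and using only image-bounds visibility as the constraint; the function names, the `min_views` threshold, and the voxel representation are illustrative, not the paper's actual implementation.

```python
import numpy as np

def project_points(K, R, t, points):
    """Project Nx3 world points into an image with a pinhole camera K[R|t]."""
    cam = points @ R.T + t                # world -> camera coordinates
    in_front = cam[:, 2] > 0              # only points in front of the camera
    px = cam @ K.T                        # apply intrinsics
    z = np.where(np.abs(px[:, 2:3]) < 1e-9, 1e-9, px[:, 2:3])
    px = px[:, :2] / z                    # perspective division
    return px, in_front

def coverage(cameras, voxels, width, height, min_views=2):
    """Fraction of voxel centres observed by at least `min_views` cameras."""
    counts = np.zeros(len(voxels), dtype=int)
    for K, R, t in cameras:
        px, in_front = project_points(K, R, t, voxels)
        visible = (in_front
                   & (px[:, 0] >= 0) & (px[:, 0] < width)
                   & (px[:, 1] >= 0) & (px[:, 1] < height))
        counts += visible                 # bools accumulate as 0/1 view counts
    return np.mean(counts >= min_views)
```

Task-specific constraints (e.g. minimum resolution on the voxel, viewing-angle or baseline requirements for stereo) would slot into the `visible` test in the same per-camera loop.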

[1] Adrian Hilton et al. Evaluation of 3D Feature Descriptors for Multi-modal Data Registration. International Conference on 3D Vision, 2013.

[2] V. Chvátal. A combinatorial theorem in plane geometry. 1975.

[3] Thinh Nguyen et al. Optimal Visual Sensor Network Configuration. Multi-Camera Networks, 2009.

[4] Steven M. Seitz et al. Photo tourism: exploring photo collections in 3D. ACM Transactions on Graphics, 2006.

[5] Peter Kovesi et al. Automatic Sensor Placement from Vision Task Requirements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1988.

[6] Jean-Yves Guillemaut et al. Calibration of Nodal and Free-Moving Cameras in Dynamic Scenes for Post-Production. International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 2011.

[7] Luiz Affonso Guedes et al. The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey. Sensors, 2010.

[8] Jean-Yves Guillemaut et al. Moving Camera Registration for Multiple Camera Setups in Dynamic Scenes. BMVC, 2010.

[9] R. Lienhart et al. On the optimal placement of multiple visual sensors. VSSN '06, 2006.

[10] Xiang Chen et al. Task-oriented optimal view selection in a calibrated multi-camera system. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2012.

[11] Pascal Fua et al. Efficient large-scale multi-view stereo for ultra high-resolution image sets. Machine Vision and Applications, 2011.

[12] Y. F. Li et al. Automatic sensor placement for model-based robot vision. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2004.

[13] Giordano Fusco et al. Selection and Orientation of Directional Sensors for Coverage Maximization. IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, 2009.

[14] Bernhard P. Wrobel. Multiple View Geometry in Computer Vision. 2001.

[15] A. Kak et al. A Look-up Table Based Approach for Solving the Camera Selection Problem in Large Camera Networks. 2006.

[16] Stan Sclaroff et al. Automated camera layout to satisfy task-specific and floor plan-specific coverage requirements. Computer Vision and Image Understanding, 2006.

[17] Mongi A. Abidi et al. Can You See Me Now? Sensor Positioning for Automated and Persistent Surveillance. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2010.

[18] Cordelia Schmid et al. A Performance Evaluation of Local Descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005.

[19] Eric Sommerlade et al. Probabilistic surveillance with multiple active cameras. IEEE International Conference on Robotics and Automation, 2010.

[20] Xiang Chen et al. Modeling Coverage in Camera Networks: A Survey. International Journal of Computer Vision, 2012.