Markerless Motion Capture using multiple Color-Depth Sensors

With the advent of the Microsoft Kinect, renewed focus has been put on monocular, depth-based motion capture. However, this approach is limited in that an actor must face the camera. Because the sensor relies on active illumination, no more than one device has been used for motion capture so far, and pose estimation therefore necessarily fails for poses occluded from the single depth camera. Our work investigates how to reduce or mitigate the detrimental interference between multiple active light emitters, thereby allowing motion capture from all angles. We systematically evaluate the concurrent use of one to four Kinects, covering calibration, error measures, and analysis, and we present a time-multiplexing approach.
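To illustrate the kind of time-multiplexing mentioned above, the sketch below shows one plausible round-robin schedule in which only one sensor's infrared emitter is active per time slot, so the structured-light patterns of different Kinects never overlap. The `DepthSensor` interface (`set_emitter`, `grab_frame`) and the slot duration are hypothetical placeholders for illustration, not the paper's actual implementation or a real Kinect SDK call; the evident trade-off is that the effective frame rate per sensor drops roughly by a factor equal to the number of devices.

```python
import time
from typing import List, Protocol


class DepthSensor(Protocol):
    """Hypothetical minimal interface for an active-light depth camera."""

    def set_emitter(self, enabled: bool) -> None: ...
    def grab_frame(self) -> object: ...


def time_multiplexed_capture(sensors: List[DepthSensor],
                             slot_seconds: float = 0.02,
                             rounds: int = 100) -> List[List[object]]:
    """Round-robin capture: only one IR emitter is active per time slot,
    so the active-light patterns of the sensors do not interfere.
    Returns one list of captured frames per sensor."""
    frames: List[List[object]] = [[] for _ in sensors]

    # Start with all emitters off to avoid cross-talk.
    for s in sensors:
        s.set_emitter(False)

    for _ in range(rounds):
        for idx, s in enumerate(sensors):
            s.set_emitter(True)          # open this sensor's time slot
            time.sleep(slot_seconds)     # let the projected pattern settle
            frames[idx].append(s.grab_frame())
            s.set_emitter(False)         # close the slot before the next sensor
    return frames
```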
