Tracking 3D shapes in noisy point clouds with Random Hypersurface Models

Depth sensors such as the Microsoft Kinect™ provide three-dimensional point clouds of an observed scene. In this paper, we apply Random Hypersurface Models (RHMs), a modeling technique for extended object tracking, to point cloud fusion in order to track a shape approximation of an underlying object. We present a novel variant of RHMs for modeling shapes in 3D space. Based on this model, we develop a specialized algorithm that tracks persons by approximating their shapes as cylinders. For evaluation, we use a network of Kinect sensors as well as simulations based on a stochastic sensor model.
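To illustrate the kind of shape approximation the abstract describes, the sketch below fits a vertical cylinder to a noisy 3D point cloud. It is a minimal baseline, not the paper's RHM estimator: it uses an algebraic (Kåsa) circle fit on the xy-projection and takes the height from the z-extent, whereas the paper's approach fuses measurements recursively with a stochastic sensor model. All function and variable names here are illustrative.

```python
import numpy as np

def fit_cylinder_axis_aligned(points):
    """Fit a vertical (z-axis-aligned) cylinder to a 3D point cloud.

    Naive baseline sketch (not the paper's RHM estimator): an algebraic
    circle fit on the xy-projection plus the z-extent as height.
    Returns (center_xy, radius, height).
    """
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Circle equation x^2 + y^2 = a*x + b*y + c is linear in (a, b, c),
    # so it can be solved by ordinary least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    center = np.array([a / 2.0, b / 2.0])
    radius = np.sqrt(c + center @ center)
    height = z.max() - z.min()
    return center, radius, height

if __name__ == "__main__":
    # Synthetic "person-as-cylinder" cloud: radius 0.3 m around (1, 2),
    # 1.8 m tall, with radial measurement noise.
    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 2.0 * np.pi, 500)
    zs = rng.uniform(0.0, 1.8, 500)
    radii = 0.3 + rng.normal(0.0, 0.01, 500)
    cloud = np.column_stack(
        [1.0 + radii * np.cos(theta), 2.0 + radii * np.sin(theta), zs]
    )
    center, radius, height = fit_cylinder_axis_aligned(cloud)
    print(center, radius, height)
```

Unlike this one-shot fit, an RHM-based tracker would treat each point as stemming from a randomly scaled version of the cylinder surface and update the shape parameters with a Bayesian filter as new point clouds arrive.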
