Visual Tracking of Self-occluding Articulated Objects

Contents

1 Introduction
2 A Framework for Tracking Self-occluding Objects

Computer sensing of hand and limb motion is an important problem for applications in human-computer interaction, virtual reality, and athletic performance measurement. We describe a framework for local tracking of self-occluding motion, in which parts of the mechanism obstruct each other's visibility to the camera. Our approach uses a kinematic model to predict occlusion and windowed templates to track partially occluded objects. We analyze our model of self-occlusion, discuss the implementation of our algorithm, and give experimental results for 3D hand tracking under significant amounts of self-occlusion. These results extend the DigitEyes system for articulated tracking described in [22, 21] to handle self-occluding motions.
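
To make the approach concrete, the following is a minimal sketch, in Python with NumPy, of the two ideas named above: using a kinematic prediction of link depth order to compute a visibility mask for each link's template window, and scoring each windowed template only over its unoccluded pixels. Everything here is an illustrative assumption rather than the paper's implementation: the function names (`project`, `visibility_masks`, `masked_ssd`), the pinhole camera, the square windows, and the SSD matching score are all invented for the sketch, and the real DigitEyes system uses a full articulated hand model with a more elaborate measurement and state-estimation pipeline.

```python
# Hypothetical illustration only: occlusion-aware windowed template tracking.
import numpy as np

FOCAL = 500.0             # assumed pinhole focal length, in pixels
IMAGE_SHAPE = (240, 320)  # (rows, cols)
HALF_WIN = 12             # half-width of each square template window

def project(p):
    """Project a camera-frame 3D point (x, y, z) to integer (row, col)."""
    x, y, z = p
    row = int(round(FOCAL * y / z)) + IMAGE_SHAPE[0] // 2
    col = int(round(FOCAL * x / z)) + IMAGE_SHAPE[1] // 2
    return (row, col), z

def visibility_masks(link_centers):
    """Predict per-link visibility from the kinematic state.

    Each link gets a square window around its projected center. A
    per-pixel depth test (nearest link wins, as in a z-buffer) decides
    which link owns each pixel; a link's mask marks the pixels of its
    own window that no nearer link has claimed."""
    owner = np.full(IMAGE_SHAPE, -1, dtype=int)
    depth = np.full(IMAGE_SHAPE, np.inf)
    windows = []
    for i, center in enumerate(link_centers):
        (r, c), z = project(center)
        r0, r1 = max(0, r - HALF_WIN), min(IMAGE_SHAPE[0], r + HALF_WIN)
        c0, c1 = max(0, c - HALF_WIN), min(IMAGE_SHAPE[1], c + HALF_WIN)
        windows.append((r0, r1, c0, c1))
        closer = z < depth[r0:r1, c0:c1]   # pixels where this link is nearest
        depth[r0:r1, c0:c1][closer] = z
        owner[r0:r1, c0:c1][closer] = i
    masks = [owner[r0:r1, c0:c1] == i
             for i, (r0, r1, c0, c1) in enumerate(windows)]
    return windows, masks

def masked_ssd(image, template, window, mask):
    """Sum-of-squared-differences over unoccluded pixels only, so a
    partially hidden link is scored on the part of it we can see."""
    r0, r1, c0, c1 = window
    patch = image[r0:r1, c0:c1]
    tmpl = template[:patch.shape[0], :patch.shape[1]]  # clip at image border
    diff = (patch - tmpl) ** 2
    return diff[mask].sum() / max(mask.sum(), 1)

# Toy usage: two fingertip links, the second nearer and offset, so it
# occludes part of the first link's window.
links = [np.array([0.00, 0.0, 1.0]),   # farther link
         np.array([0.01, 0.0, 0.8])]   # nearer link, overlapping windows
rng = np.random.default_rng(0)
image = rng.random(IMAGE_SHAPE)
templates = [rng.random((2 * HALF_WIN, 2 * HALF_WIN)) for _ in links]
windows, masks = visibility_masks(links)
print([m.mean() for m in masks])  # fraction of each window still visible
print([masked_ssd(image, t, w, m)
       for t, w, m in zip(templates, windows, masks)])
```

The z-buffer-style depth test in `visibility_masks` stands in for occlusion prediction: whichever link the kinematic state places nearest the camera claims the contested pixels, and every other link is then tracked on whatever part of its window survives, rather than being dropped outright.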

[1] Alex Pentland et al. Space-time gestures. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1993.

[2] Luc Robert et al. Camera calibration without feature extraction. Proceedings of the 12th International Conference on Pattern Recognition, 1994.

[3] Mark de Berg et al. Perfect Binary Space Partitions. Computational Geometry, 1993.

[4] John Canny et al. The Complexity of Robot Motion Planning. 1988.

[5] Demetri Terzopoulos et al. Energy Constraints on Deformable Models: Recovering Shape and Non-Rigid Motion. AAAI, 1987.

[6] David Mumford et al. The 2.1-D sketch. Proceedings of the Third International Conference on Computer Vision, 1990.

[7] A. Pentland et al. Robust estimation of a multi-layered motion representation. Proceedings of the IEEE Workshop on Visual Motion, 1991.

[8] Robert J. Holt et al. Determining articulated motion from perspective views: a decomposition approach. Proceedings of the 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects, 1994.

[9] Thomas Ertl et al. Computer Graphics: Principles and Practice, 3rd Edition. 2014.

[10] Henry Fuchs et al. On visible surface generation by a priori tree structures. SIGGRAPH '80, 1980.

[11] Masanobu Yamamoto et al. Human motion analysis based on a robot arm model. Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991.

[12] Dimitris N. Metaxas et al. Shape and Nonrigid Motion Estimation Through Physics-Based Synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993.

[13] Mark W. Spong et al. Robot Dynamics and Control. 1989.

[14] Edward H. Adelson et al. Layered representation for motion analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1993.

[16] David C. Hogg. Model-based vision: a program to see a walking person. Image and Vision Computing, 1983.

[17] J. O'Rourke et al. Model-based image analysis of human motion using constraint propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1980.

[18] Takeo Kanade et al. DigitEyes: Vision-Based Human Hand Tracking. 1993.

[19] James M. Rehg et al. Visual tracking with deformation models. Proceedings of the 1991 IEEE International Conference on Robotics and Automation, 1991.

[20] Takeo Kanade et al. Visual Tracking of High DOF Articulated Structures: An Application to Human Hand Tracking. ECCV, 1994.

[21] Takeo Kanade et al. DigitEyes: vision-based hand tracking for human-computer interaction. Proceedings of the 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects, 1994.

[22] Katsushi Ikeuchi et al. Grasp Recognition Using the Contact Web. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1992.

[23] Alex Pentland et al. Recovery of Nonrigid Motion and Structure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991.