Improved CNN-Based Marker Labeling for Optical Hand Tracking

Hand tracking is essential in many applications, ranging from the creation of CGI movies to medical applications and even real-time, natural, physically-based grasping in VR. Optical marker-based tracking is often the method of choice because of its high accuracy, its support for large workspaces, its good performance, and the fact that it requires no wiring of the user. However, the tracking algorithms may fail for hand poses in which some of the markers are occluded. These cases require a subsequent reassignment of labels to reappearing markers. Currently, convolutional neural networks (CNNs) show promising results for this re-labeling because they are relatively stable and real-time capable. In this paper, we present several methods to improve the accuracy of label predictions using CNNs. The main idea is to improve the input to the CNNs, which is derived from the output of the optical tracking system. To do so, we propose a method based on principal component analysis, a projection method perpendicular to the palm, and a multi-image approach. Our results show that our methods provide better label predictions than current state-of-the-art algorithms, and they can even be extended to other tracking applications.
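To illustrate the kind of input normalization the abstract refers to, the following is a minimal sketch of a PCA-based alignment of the tracked marker positions before they are converted into a CNN input. It assumes the markers arrive as an (N, 3) NumPy array; the helper name `pca_align` and all details are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pca_align(markers):
    """Align an (N, 3) array of 3-D marker positions to its principal axes.

    Centers the point cloud and rotates it so that the directions of
    greatest variance coincide with the coordinate axes, yielding a
    pose-normalized input that is invariant to the global hand orientation.
    NOTE: illustrative sketch only, not the method from the paper.
    """
    centered = markers - markers.mean(axis=0)
    # Eigen-decomposition of the symmetric 3x3 covariance matrix.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort axes by descending variance so axis 0 is the dominant direction.
    order = np.argsort(eigvals)[::-1]
    return centered @ eigvecs[:, order]
```

After such an alignment, the rotated points could, for example, be rasterized into a fixed-size image whose pixels the CNN consumes; the alignment removes rigid-body orientation so the network only has to learn pose-dependent marker patterns.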
