Pointing Gesture Interface for Large Display Environments Based on the Kinect Skeleton Model

Although many three-dimensional pointing gesture recognition methods have been studied, the problem of self-occlusion has received little attention. When two body positions are used to define a pointing vector, one position can occlude the other along a single camera perspective line, causing detection inaccuracies. In this paper, we propose a pointing gesture recognition method for large display environments based on the Kinect skeleton model. To account for self-occlusion, the person's detected shoulder position is compensated when it is occluded by the hand. Experimental results indicate that, with this exception handling for self-occlusions, the pointing accuracy at a given reference position is greatly improved; the average root-mean-square error was approximately 13 pixels at a screen resolution of 1920×1080.
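The core geometric step described above can be sketched as a ray-plane intersection: a pointing ray is cast from the shoulder joint through the hand joint and intersected with the display plane. The function below is a minimal illustration of that idea, not the paper's implementation; the joint coordinates, the plane placement, and the helper name `intersect_screen` are all assumptions for the example.

```python
import numpy as np

def intersect_screen(shoulder, hand, plane_point, plane_normal):
    """Cast a ray from the shoulder joint through the hand joint and
    return its intersection with the display plane, or None if the ray
    is parallel to the plane or points away from it."""
    d = hand - shoulder                      # pointing direction
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None                          # ray parallel to screen
    t = np.dot(plane_normal, plane_point - shoulder) / denom
    if t <= 0:
        return None                          # screen is behind the user
    return shoulder + t * d

# Hypothetical joint positions in metres (camera coordinates).
shoulder = np.array([0.2, 1.4, 2.5])
hand = np.array([0.3, 1.3, 2.0])

# Display assumed to lie in the z = 0 plane, facing the user.
hit = intersect_screen(shoulder, hand,
                       np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]))
```

The resulting 3D point would then be mapped to pixel coordinates on the 1920×1080 display; when the shoulder joint is lost to self-occlusion, a compensated shoulder estimate (e.g., carried over from recent frames) would replace the raw measurement before the ray is cast.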