Augmenting Indirect Multi-Touch Interaction with 3D Hand Contours and Skeletons

This work in progress aims to make indirect multi-touch interaction more usable by providing 3D visualizations of the hands and fingers, so that the user continuously knows their positions before an interaction occurs. We use depth-sensing cameras to track the user's hands above the surface and to recognize the point of interaction with a plain horizontal surface at a predefined height. This allows us to support various visual augmentation techniques, such as visualizations of 3D hand contours, skeletons, and fingertips, that provide visual cues both for depth estimation while the hand hovers above the surface and for the moment of touch. The goal is to provide users with effective and intuitive indirect multi-touch interaction on a regular desktop PC.
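The core touch-recognition idea — a fingertip counts as touching when its depth falls within a narrow band just above the predefined surface height — can be sketched as follows. This is a minimal illustration under assumed values: the constants, function names, and the per-pixel thresholding are illustrative assumptions, not the paper's actual implementation (which would also need hand segmentation, calibration, and noise filtering).

```python
import numpy as np

# Assumed calibration constants (illustrative, not from the paper):
SURFACE_DEPTH_MM = 800.0  # distance from camera to the plain horizontal surface
TOUCH_BAND_MM = 10.0      # fingertip within 10 mm of the surface counts as a touch
NOISE_MM = 2.0            # margin that excludes the bare surface itself from matching

def touch_mask(depth_frame_mm: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose depth lies in the touch band just above the surface."""
    return ((depth_frame_mm > SURFACE_DEPTH_MM - TOUCH_BAND_MM) &
            (depth_frame_mm < SURFACE_DEPTH_MM - NOISE_MM))

def touch_points(depth_frame_mm: np.ndarray) -> list[tuple[int, int]]:
    """Pixel coordinates (row, col) of candidate touch points in one depth frame."""
    rows, cols = np.nonzero(touch_mask(depth_frame_mm))
    return list(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    # Synthetic frame: bare surface everywhere, one fingertip 5 mm above the surface,
    # and one hovering fingertip 100 mm above it (should not register as a touch).
    frame = np.full((4, 4), SURFACE_DEPTH_MM)
    frame[2, 3] = SURFACE_DEPTH_MM - 5.0    # touching
    frame[0, 1] = SURFACE_DEPTH_MM - 100.0  # hovering
    print(touch_points(frame))
```

In practice, the hovering pixels rejected here are exactly the ones the visual augmentations (contours, skeletons, fingertips) would render, giving the user depth cues before the touch event fires.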