Bi-manual gesture interaction for 3D point cloud selection and annotation using COTS

This paper presents our solution to the 3DUI 2014 Contest, which addresses the selection and annotation of 3D point cloud data. This is a classic problem that, when solved well, benefits a wide range of 3D virtual reality applications and environments. Our approach is a robust, simple and intuitive solution based on bi-manual interaction gestures. We provide a first-person navigation mode for data exploration, point selection and annotation, letting the user navigate naturally with their hands. Through a bi-manual gesture interface, the user performs simple but powerful gestures to explore, select and annotate points in the 3D point cloud within a 3D modelling tool. The implementation is based on commercial off-the-shelf (COTS) systems: for modelling and annotation we adopted Blender, a widely available open-source 3D editing tool, and for gesture recognition we adopted the low-cost Leap Motion desktop sensor. An informal user study showed that our solution is intuitive: users were able to operate the system easily, with a short learning curve.
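
As a rough illustration of the interaction loop described above, the sketch below polls hand data through the Leap Motion Python SDK (Controller, frame(), hands and palm_position are SDK names) and derives a navigation hint. The concrete mapping (one hand translates, two hands rotate) and the helper name poll_navigation are our assumptions for illustration, not the paper's exact gesture set.

    # Minimal sketch (an assumed mapping, not the paper's exact gesture set):
    # poll Leap Motion hand data and derive a bi-manual navigation hint.
    import Leap

    def poll_navigation(controller):
        """Return a ('translate'|'rotate', (x, y, z)) hint, or None."""
        frame = controller.frame()
        hands = frame.hands
        if hands.is_empty:
            return None
        if hands.count == 1:
            # One hand: treat the palm position (millimetres, device
            # coordinates) as a camera-translation handle.
            palm = hands[0].palm_position
            return ('translate', (palm.x, palm.y, palm.z))
        # Two hands: use the axis between the palms as a rotation handle
        # (hypothetical bi-manual mapping for illustration).
        left = min(hands, key=lambda h: h.palm_position.x)
        right = max(hands, key=lambda h: h.palm_position.x)
        return ('rotate', (right.palm_position.x - left.palm_position.x,
                           right.palm_position.y - left.palm_position.y,
                           right.palm_position.z - left.palm_position.z))

    if __name__ == '__main__':
        controller = Leap.Controller()
        # In a full system this would be polled from the host application's
        # event loop (e.g. a Blender modal operator) and applied to the 3D
        # viewport; a single standalone poll is shown here for brevity.
        print(poll_navigation(controller))

In an integration of this kind, the host tool (here, Blender) would consume such hints each frame and map them onto its own view-navigation and selection operators, keeping the gesture layer decoupled from the modelling tool.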