GMM-based Spatial Change Detection from Bimanual Tracking and Point Cloud Differences

Robots that detect changes in their environment can attain better context awareness and increased autonomy. In this work, a spatial change detection approach is presented that uses a single fixed depth camera to identify environment changes caused by human activities. The proposed method combines hand tracking with the difference between organized point clouds. Bimanual movements are recorded in real time and encoded in Gaussian Mixture Models (GMMs). We show that GMMs enable change detection in the presence of occlusions. We also show that the GMM analysis narrows down the salient regions of space where manipulation actions are carried out. Experiments have been performed in an indoor environment for object placement, object removal, and object repositioning tasks.
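To illustrate the core idea, the sketch below encodes a set of tracked hand positions in a GMM and scores candidate change points from a point-cloud difference against it, so that changes near manipulation regions score higher. This is a minimal illustration using scikit-learn's `GaussianMixture`, not the paper's implementation; the trajectory data, component count, and thresholding are hypothetical assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for recorded bimanual hand positions (x, y, z in meters).
rng = np.random.default_rng(0)
traj = np.vstack([
    rng.normal([0.2, 0.0, 0.5], 0.02, size=(200, 3)),  # left-hand activity cluster
    rng.normal([0.6, 0.1, 0.4], 0.02, size=(200, 3)),  # right-hand activity cluster
])

# Encode the hand trajectories as a GMM; each component marks a
# candidate salient region where manipulation took place.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(traj)

# Points flagged by the point-cloud difference can then be scored
# against the model: a change near a manipulation region gets a
# higher log-likelihood than a spurious change elsewhere.
query = np.array([
    [0.2, 0.0, 0.5],   # near the left-hand cluster
    [1.5, 1.5, 1.5],   # far from any hand activity
])
log_lik = gmm.score_samples(query)
is_salient = log_lik > log_lik.mean()  # illustrative threshold, not from the paper
```

In this framing, the GMM acts as a spatial prior over where changes are plausible, which is what lets the method cope with occlusions: a region occluded in the depth image can still be flagged as salient if the tracked hands passed through it.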
