Scene Interpretation for Self-Aware Cognitive Robots

We propose a visual scene interpretation system that enables cognitive robots to maintain a consistent world model of their environments. The system is part of our lifelong experimental learning framework, which allows robots to analyze failure contexts so that they can act more robustly in future tasks. Analyzing failure contexts efficiently requires appropriate scene interpretation. In our system, LINE-MOD and HS histograms are used to recognize objects with and without texture. In addition, depth-based segmentation is applied to identify unknown objects in the scene; this information is also used to augment recognition performance. The world model includes not only the objects detected in the environment but also their spatial relations, so that contexts can be represented efficiently. Extracting unary and binary relations such as on, on_ground, clear and near is useful for symbolic representation of scenes. We evaluate the performance of our system on recognizing objects, determining spatial predicates, and maintaining the consistency of the robot's world model in the real world. Our preliminary results show that the system can successfully extract spatial relations in a scene and build a consistent world model from the information gathered by the onboard RGB-D sensor as the robot explores its environment.
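The abstract does not give formal definitions of the spatial predicates, but their intent can be illustrated with a minimal sketch. The version below assumes each detected object is reduced to an axis-aligned 3D bounding box (as depth-based segmentation of RGB-D data would typically yield), with z pointing up; the `Box` class, the contact tolerance `eps`, and the `near` threshold are all illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Illustrative axis-aligned 3D box: (x, y, z) is the min corner, z is up."""
    name: str
    x: float; y: float; z: float   # min corner
    w: float; d: float; h: float   # extents along x, y, z

    @property
    def top(self) -> float:
        return self.z + self.h

def overlaps_xy(a: Box, b: Box) -> bool:
    """True if the two footprints overlap in the ground plane."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.d and b.y < a.y + a.d)

def on(a: Box, b: Box, eps: float = 0.02) -> bool:
    """a rests on b: footprints overlap and a's bottom meets b's top."""
    return overlaps_xy(a, b) and abs(a.z - b.top) <= eps

def on_ground(a: Box, eps: float = 0.02) -> bool:
    """a's bottom face lies at floor height (z = 0)."""
    return a.z <= eps

def clear(a: Box, scene: list, eps: float = 0.02) -> bool:
    """Nothing in the scene rests on top of a."""
    return not any(on(o, a, eps) for o in scene if o is not a)

def near(a: Box, b: Box, thresh: float = 0.15) -> bool:
    """Footprint centroids closer than a distance threshold (in meters)."""
    ax, ay = a.x + a.w / 2, a.y + a.d / 2
    bx, by = b.x + b.w / 2, b.y + b.d / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= thresh
```

For example, with a table at the origin and a cup resting on its top surface, `on(cup, table)`, `on_ground(table)`, and `clear(cup, scene)` all hold, while `clear(table, scene)` does not. Predicates of this form can then be asserted as symbolic facts in the world model each time the perception pipeline updates.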
