Identification of Unmodeled Objects from Symbolic Descriptions

Successful human-robot cooperation hinges on each agent's ability to process and exchange information about the shared environment and the task at hand. Human communication is primarily based on symbolic abstractions of object properties rather than precise quantitative measures. A comprehensive robotic framework therefore requires an integrated communication module that can link and convert between perceptual and abstract information. The ability to interpret composite symbolic descriptions enables an autonomous agent to a) operate in unstructured and cluttered environments, on tasks involving unmodeled or never-before-seen objects; and b) exploit the aggregation of multiple symbolic properties as an instance of ensemble learning, improving identification performance even when the individual predicates encode generic information or are imprecisely grounded. We propose a discriminative probabilistic model that interprets symbolic descriptions to identify the referent object contextually, with respect to the structure of the environment and the other objects in it. The model is trained on a collected dataset of identifications, and its performance is evaluated both by quantitative measures and in a live demonstration on the PR2 robot platform, which integrates perception, object extraction, object identification, and grasping.
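The ensemble effect of aggregating several grounded predicates can be illustrated with a minimal sketch. This is not the paper's trained model: the predicate groundings, feature names, and the log-product scoring rule below are all illustrative assumptions, standing in for learned discriminative components.

```python
import math

# Hypothetical predicate groundings: each maps (object, scene) to the
# probability that the symbolic predicate holds for that object.
# Some predicates are contextual, i.e. they depend on the other
# objects in the scene, not just on the candidate itself.

def p_red(obj, scene):
    return obj["redness"]  # assumed perceptual feature in [0, 1]

def p_large(obj, scene):
    # Contextual grounding: size relative to the largest object present.
    return obj["size"] / max(o["size"] for o in scene)

def identify(scene, predicates):
    """Return the candidate maximizing the sum of log predicate scores
    (i.e. the product of predicate probabilities)."""
    def score(obj):
        return sum(math.log(max(p(obj, scene), 1e-9)) for p in predicates)
    return max(scene, key=score)

# Toy scene of candidate objects with made-up perceptual features.
scene = [
    {"name": "mug",   "redness": 0.9, "size": 0.3},
    {"name": "box",   "redness": 0.8, "size": 0.9},
    {"name": "plate", "redness": 0.1, "size": 0.6},
]

# "the large red object": neither generic predicate alone is decisive
# (the mug is reddest, the plate is mid-sized), but their combination
# singles out the box.
referent = identify(scene, [p_red, p_large])
print(referent["name"])  # -> box
```

Even imprecise groundings help here: each predicate only needs to shift probability mass toward the referent, and the aggregation over predicates sharpens the final decision, which is the ensemble behavior the abstract refers to.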
