Functional Descriptors for Object Affordances

In the context of robot learning from demonstration, it is important that a robot understands what an object can be used for. By observing a human performing an activity, a robot should be able to identify the human motion, the objects involved, and the outcome of the performed activity. One important aspect of this challenging problem is to detect and reason about objects in terms of affordance or, alternatively, in terms of their function in the current activity. Affordance is often modeled in terms of appearance; however, appearance does not necessarily map one-to-one to functional classes. In this paper we propose two alternative features that characterize objects directly in terms of how they are used. Our approach shows a significant improvement over traditional appearance-based methods.
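To make the core idea concrete, the following is a minimal sketch, not the paper's actual method: it assumes each object in a demonstration can be reduced to a string of per-frame hand-object interaction symbols (the symbols, class names, and the p-spectrum string kernel used here are illustrative stand-ins), and it classifies a new object by how well its usage string matches those of known functional classes, with no appearance information involved.

```python
# Sketch of usage-based (functional) object description, assuming a
# hypothetical symbolic encoding of hand-object interaction per frame.

from collections import Counter


def spectrum_features(seq: str, p: int = 3) -> Counter:
    """Count all length-p substrings (the p-spectrum) of a usage string."""
    return Counter(seq[i:i + p] for i in range(len(seq) - p + 1))


def spectrum_kernel(a: str, b: str, p: int = 3) -> int:
    """Similarity of two usage strings via shared substring counts."""
    fa, fb = spectrum_features(a, p), spectrum_features(b, p)
    return sum(fa[s] * fb[s] for s in fa if s in fb)


def classify(query: str, labelled: dict) -> str:
    """Assign the functional class whose exemplar string matches the query best."""
    return max(labelled, key=lambda cls: spectrum_kernel(query, labelled[cls]))


if __name__ == "__main__":
    # Hypothetical interaction symbols: G = grasped, P = poured from,
    # T = tilted, C = cut with, I = idle on table.
    training = {
        "pouring-vessel": "IIGGTTPPPPTTGGII",
        "cutting-tool":   "IIGGCCCCCCGGIIII",
    }
    observed = "IGGTTTPPPTTGGIII"      # a newly observed object being used
    print(classify(observed, training))  # -> "pouring-vessel"
```

Two visually different objects that are used the same way produce similar strings and thus fall into the same functional class, which is exactly the property that a purely appearance-based descriptor cannot guarantee.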
