Affordance triggering for arbitrary states based on robot exploration

Affordance-based methods are a biologically inspired way to advance the cognitive capabilities of robots. Affordances encode the relationships between a robot and its environment in terms of the actions the robot is able to perform. The most notable feature of affordance-based perception is that an object is perceived through its functional attributes (e.g., reachable, graspable) rather than its visual attributes (e.g., size, shape). Most existing work treats affordance prediction as a binary classification problem and assumes that an affordance is never hidden. In practice, however, an object may be placed in an arbitrary state, and because the interaction point matters, the robot sometimes needs to adapt itself (e.g., rotate its wrist or turn left) to trigger a hidden affordance. In this paper, we formulate affordance triggering for arbitrary object states as a regression problem. In our experiment, a manipulator robot rotates its wrist to trigger the graspable affordance regardless of the state of the object.
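As a rough illustration of the regression formulation (not the paper's actual model, and with entirely hypothetical data), one could imagine the robot logging (object state, successful wrist rotation) pairs while exploring, then fitting a regressor that maps an object's observed orientation to the wrist rotation expected to expose the graspable affordance:

```python
# Hypothetical sketch: regress from an object's observed yaw to the wrist
# rotation that triggered the "graspable" affordance during exploration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic exploration log: object yaw (rad) -> wrist rotation that worked.
# The true relation here is wrist = 0.5 * yaw + 0.1, plus sensor noise.
object_yaw = rng.uniform(-np.pi, np.pi, size=200)
wrist_angle = 0.5 * object_yaw + 0.1 + rng.normal(0.0, 0.01, size=200)

# Ordinary least squares fit of wrist = a * yaw + b.
A = np.column_stack([object_yaw, np.ones_like(object_yaw)])
(a, b), *_ = np.linalg.lstsq(A, wrist_angle, rcond=None)

def predict_wrist(yaw: float) -> float:
    """Predict the wrist rotation expected to expose the graspable affordance."""
    return a * yaw + b
```

A linear model is chosen only to keep the sketch minimal; the same exploration-driven data could just as well feed a nonlinear regressor when the state-to-adaptation mapping is more complex.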
