Affordance graph: A framework to encode perspective taking and effort based affordances for day-to-day human-robot interaction

The analysis of affordances has its roots in the socio-cognitive development of primates. Knowing what the environment, including other agents, can offer in terms of action capabilities is important for our day-to-day interaction and cooperation. In this paper, we merge two complementary aspects of affordances: from the agent-object perspective, what an agent can afford to do with an object, and from the agent-agent perspective, what an agent can afford to do for another agent, and we present the unified notion of an Affordance Graph. The graph encodes affordances for a variety of tasks: take, give, pick, put on, put into, show, hide, make accessible, etc. Another novelty is the incorporation of effort and perspective-taking in the construction of the graph. Hence, the Affordance Graph captures the agents' capabilities to manipulate objects among themselves and across places, together with the required levels of effort and the candidate placements. We also demonstrate some interesting applications.
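To make the idea concrete, below is a minimal sketch of what such a graph could look like as a data structure, assuming a discrete effort scale and a fixed task vocabulary. All names here (`AffordanceGraph`, `Affordance`, `Effort`, `afforded_tasks`, the effort levels) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of an affordance-graph data structure. Assumptions:
# a discrete effort scale and string identifiers for agents, objects,
# and places. This is not the paper's actual implementation.
from __future__ import annotations

from dataclasses import dataclass
from enum import IntEnum


class Effort(IntEnum):
    """Hypothetical discrete effort levels for reaching or seeing a target."""
    NO_EFFORT = 0          # already reachable/visible as-is
    ARM_EFFORT = 1         # requires stretching an arm
    TORSO_EFFORT = 2       # requires leaning the torso
    WHOLE_BODY_EFFORT = 3  # requires standing up or moving


@dataclass(frozen=True)
class Affordance:
    """One labeled edge: `agent` can perform `task` with respect to `target`."""
    agent: str             # e.g. "robot", "human_1"
    task: str              # e.g. "give", "show", "make_accessible"
    target: str            # another agent, or an object
    effort: Effort         # effort required of `agent`
    places: tuple = ()     # candidate placements that support the task


class AffordanceGraph:
    """Directed multigraph over agents, objects, and places."""

    def __init__(self) -> None:
        self._edges: list[Affordance] = []

    def add(self, aff: Affordance) -> None:
        self._edges.append(aff)

    def afforded_tasks(self, agent: str, max_effort: Effort) -> list[Affordance]:
        """Return the tasks `agent` can perform without exceeding `max_effort`."""
        return [a for a in self._edges
                if a.agent == agent and a.effort <= max_effort]


# Example: the robot can show the book to human_1 with arm effort only.
g = AffordanceGraph()
g.add(Affordance("robot", "show", "human_1", Effort.ARM_EFFORT,
                 places=("table_center",)))
print(g.afforded_tasks("robot", max_effort=Effort.TORSO_EFFORT))
```

Querying with an effort bound, as in the last line, reflects the paper's premise that an affordance is not binary: the same task may be afforded at different costs, and a planner can prefer low-effort edges.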
