Learning Social Affordances and Using Them for Planning

Kadir Firat Uyanik, Yigit Caliskan, Asil Kaan Bozcuoglu, Onur Yuruten, Sinan Kalkan, Erol Sahin
{kadir, yigit, asil, oyuruten, skalkan, erol}@ceng.metu.edu.tr
KOVAN Research Lab, Dept. of Computer Eng., METU, Ankara, Turkey

Abstract

This study extends the learning and use of affordances on robots on two fronts. First, we use the very same affordance learning framework that was used for learning the affordances of inanimate things to learn social affordances, that is, affordances whose existence requires the presence of humans. Second, we use the learned affordances for making multi-step plans. Specifically, an iCub humanoid platform is equipped with a perceptual system to sense objects placed on a table, as well as the presence and state of humans in the environment, and with a behavioral repertoire that consists of simple object manipulations as well as voice behaviors that utter simple verbs. After interacting with objects and humans, the robot learns a set of affordances with which it can make multi-step plans towards achieving a demonstrated goal.

Introduction

The motor competences of robots operating in our environments are likely to remain inferior to ours on most fronts in the near future. To complete tasks that require competences beyond their abilities, robots will need to interact with the humans in their environment to compensate for these deficiencies. The inspiration for our study comes from babies and small children, who compensate for their lack of physical competence by recruiting adults through social interaction. For instance, for a child, a candy on a high shelf becomes reachable only in the presence of an adult, and only as long as the child can "manipulate" the adult properly using social behaviors.

In this paper, we extend an affordance framework that was proposed for robots learning interactions with inanimate objects to learning interactions with humans. The notion of affordances, proposed by Gibson (Gibson, 1986), emphasizes the interaction between the agent and the environment, as opposed to the agent or the environment alone, and provides a unifying framework for its study.

Contribution

This study extends the learning and use of affordances on robots on two fronts. First, we use the very same affordance learning framework that was used for learning the affordances of inanimate things to learn social affordances* (viz. affordances whose existence requires the presence of humans). Second, we use the learned affordances to make multi-step plans.

* We would like to note that the term social affordances has been used in different contexts, e.g., for the possibilities emerging from social networks (Wellman et al., 2003), or for the affordances of an environment and the properties of people that facilitate social interaction in a group of people (Kreijns & Kirschner, 2001).

In our earlier studies, we had proposed a framework that allowed the robot to learn affordances such as the traversability of an environment (Ugur & Şahin, 2010) and the graspability (Ugur, Şahin, & Öztop, 2009) and liftability (Dag, Atil, Kalkan, & Şahin, 2010) of objects, and we showed how multi-step plans can be made using the learned affordances. In this paper, we argue that robots can use the very same framework to learn what a human may afford. Moreover, we enhance our prior study on multi-step planning (Ugur et al., 2009) via a new form of prototypes for effect representation.
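To give a feel for what such effect prototypes might look like, here is a minimal, hypothetical sketch: we assume an effect is represented as the change in the perceived feature vector caused by a behavior, and that prototypes are obtained by clustering these change vectors per behavior. The function name effect_prototypes and the use of k-means are our illustrative assumptions, not the paper's method (a clustering scheme such as the robust growing neural gas cited in the references could play the same role).

    # Hypothetical sketch: effect prototypes as cluster centers of per-behavior
    # effect vectors, where an effect vector is the feature difference between
    # the percepts recorded after and before executing the behavior.
    from collections import defaultdict
    from typing import Dict, List, Tuple

    import numpy as np
    from sklearn.cluster import KMeans  # illustrative stand-in for the clustering used

    def effect_prototypes(
        interactions: List[Tuple[str, np.ndarray, np.ndarray]],
        n_prototypes: int = 3,
    ) -> Dict[str, np.ndarray]:
        """interactions: (behavior, features_before, features_after) triples
        recorded during the robot's exploration of objects and humans."""
        effects = defaultdict(list)
        for behavior, before, after in interactions:
            effects[behavior].append(after - before)  # observed effect vector
        return {
            behavior: KMeans(n_clusters=min(n_prototypes, len(vecs)), n_init=10)
            .fit(np.vstack(vecs))
            .cluster_centers_
            for behavior, vecs in effects.items()
        }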
Specifically, we equipped the humanoid robot iCub with a perceptual system to sense tabletop objects, as well as the presence and state of humans in the environment, and with a behavioral repertoire that consisted of simple object manipulations and voice behaviors that uttered simple verbs. After interacting with objects and humans, we show that the robot is able to learn a set of affordances with which it can make multi-step plans towards achieving a demonstrated goal.

Related Work

The notion of affordances provides a perspective that puts the focus on the interaction (rather than the agent or the environment) and was formalized as a relation a between an entity or environment e, a behavior b, and the effect f of behavior b on e (Şahin, Çakmak, Doğar, Uğur, & Üçoluk, 2007; Montesano, Lopes, Bernardino, & Santos-Victor, 2008):

    a = (e, b, f).                                (1)

For example, a behavior b_lift that produces an effect f_lifted on an object e_cup forms an affordance relation (e_cup, b_lift, f_lifted). Note that an agent would require more such relations, over different objects and behaviors, to learn more general affordance relations.

The studies on learning and use of affordances have mostly been confined to inanimate things, such as objects (Fitzpatrick, Metta, Natale, Rao, & Sandini, 2003; Detry, Kraft, Buch, Kruger, & Piater, 2010; Atil, Dag, Kalkan, & Şahin, 2010; Dag et al., 2010) and tools (Sinapov & Stoytchev, 2008; Stoytchev, 2008) that the robot can interact with. In these studies, the robot interacts with the environment through a set of actions, and learns to perceptually detect and actualize the resulting affordances. Moreover, with the exception of a few studies (Ugur et al., 2009; Williams & Breazeal, 2012), the robots were only able to perceive immediate affordances, i.e., those that can be actualized with a single-step action plan.

Formalizations such as Eq. 1 have proven to be practical, with successful applications in navigation (Ugur & Şahin, 2010), manipulation (Fitzpatrick et al., 2003; Detry et al., 2010; Ugur et al., 2009; Ugur, Öztop, & Şahin, 2011), conceptualization and language (Atil et al., 2010; Dag et al., 2010; Yürüten et al., 2012), and vision (Dag et al., 2010). However,
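To make the relation in Eq. 1 and its use in planning concrete, the following is a minimal, hypothetical sketch of how learned (entity, behavior, effect) relations can be chained into multi-step plans. The names Affordance, applicable, effect_prototype, and plan are our illustrative assumptions, as is the breadth-first search; this is a sketch of the general idea, not the implementation used in this paper.

    # Hypothetical sketch: an affordance relation a = (e, b, f) and multi-step
    # planning by chaining predicted effects in perceptual (feature) space.
    from dataclasses import dataclass
    from typing import Callable, List, Optional, Tuple

    import numpy as np

    Percept = np.ndarray  # feature vector describing an entity or scene (e)

    @dataclass
    class Affordance:
        """One learned relation: applying `behavior` (b) to a percept (e) is
        predicted to produce an effect (f), modeled as a change in features."""
        behavior: str                                   # e.g. "lift", or a voice behavior
        applicable: Callable[[Percept], bool]           # learned applicability predictor
        effect_prototype: Callable[[Percept], Percept]  # learned predicted change

    def plan(
        start: Percept,
        goal: Percept,
        affordances: List[Affordance],
        max_depth: int = 4,
        tol: float = 0.1,
    ) -> Optional[List[str]]:
        """Breadth-first forward search: apply each applicable behavior's
        predicted effect to the current percept, and return the first behavior
        sequence whose predicted outcome is close enough to the goal."""
        frontier: List[Tuple[Percept, List[str]]] = [(start, [])]
        for _ in range(max_depth):
            successors: List[Tuple[Percept, List[str]]] = []
            for percept, behaviors in frontier:
                for a in affordances:
                    if not a.applicable(percept):
                        continue
                    predicted = percept + a.effect_prototype(percept)  # next percept
                    sequence = behaviors + [a.behavior]
                    if np.linalg.norm(predicted - goal) < tol:
                        return sequence
                    successors.append((predicted, sequence))
            frontier = successors
        return None  # no plan found within max_depth steps

In such a scheme, a social affordance needs no special machinery: a voice behavior directed at a human simply has its own learned applicability and effect predictors, so it can appear in a plan alongside object manipulations.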

References

[1] Justus H. Piater et al. Refining grasp affordance models by experience. IEEE International Conference on Robotics and Automation, 2010.

[2] Emre Ugur et al. Goal emulation and planning in perceptual space using learned affordances. Robotics and Autonomous Systems, 2011.

[3] Barry Wellman et al. The Social Affordances of the Internet for Networked Individualism. Journal of Computer-Mediated Communication, 2006.

[4] Stefan Schaal et al. Learning and generalization of motor skills by learning from demonstration. IEEE International Conference on Robotics and Automation, 2009.

[5] J. Sinapov et al. Detecting the functional similarities between tools using a hierarchical representation of outcomes. 7th IEEE International Conference on Development and Learning, 2008.

[6] Maya Cakmak et al. Keyframe-based Learning from Demonstration. International Journal of Social Robotics, 2012.

[7] Sinan Kalkan et al. Affordances and Emergence of Concepts. EpiRob, 2010.

[8] P. N. Suganthan et al. Robust growing neural gas algorithm with application in cluster analysis. Neural Networks, 2004.

[9] Michael J. Richardson et al. Social Connection Through Joint Action and Interpersonal Coordination. Topics in Cognitive Science, 2009.

[10] Giulio Sandini et al. Learning about objects through action - initial steps towards artificial cognition. IEEE International Conference on Robotics and Automation, 2003.

[11] Emre Ugur et al. Traversability: A Case Study for Learning and Perceiving Affordances in Robots. Adaptive Behavior, 2010.

[12] Sinan Kalkan et al. Learning Adjectives and Nouns from Affordances on the iCub Humanoid Robot. SAB, 2012.

[13] Maya Cakmak et al. To Afford or Not to Afford: A New Formalization of Affordances Toward Affordance-Based Robot Control. Adaptive Behavior, 2007.

[14] Terrence Fong et al. Collaboration, Dialogue, Human-Robot Interaction. ISRR, 2001.

[15] Yukie Nagai et al. Learning to grasp with parental scaffolding. 11th IEEE-RAS International Conference on Humanoid Robots, 2011.

[16] J. Hodgins et al. Designing gaze behavior for humanlike robots. 2009.

[17] Cynthia Breazeal et al. Social interactions in HRI: the robot view. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2004.

[18] Manuel Lopes et al. Learning Object Affordances: From Sensory-Motor Coordination to Imitation. IEEE Transactions on Robotics, 2008.

[19] Kerstin Dautenhahn et al. Self-Imitation and Environmental Scaffolding for Robot Teaching. 2007.

[20] Sinan Kalkan et al. Learning Affordances for Categorizing Objects and Their Properties. 20th International Conference on Pattern Recognition, 2010.

[21] Brett Browning et al. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 2009.

[22] Jutta Weber. Human-Robot Interaction. Handbook of Research on Computer Mediated Communication, 2008.

[23] Paul A. Kirschner et al. The social affordances of computer-supported collaborative learning environments. 31st Annual Frontiers in Education Conference, 2001.

[24] Alexander Stoytchev. Learning the Affordances of Tools Using a Behavior-Grounded Approach. Towards Affordance-Based Robot Control, 2006.

[25] Maya Cakmak et al. Exploiting social partners in robot learning. Autonomous Robots, 2010.

[26] Cynthia Breazeal et al. A reasoning architecture for human-robot joint tasks using physics-, social-, and capability-based logic. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.

[27] Michael J. Richardson et al. Judging and actualizing intrapersonal and interpersonal affordances. Journal of Experimental Psychology: Human Perception and Performance, 2007.

[28] J. J. Gibson. The Ecological Approach to Visual Perception. 1986.

[29] Emre Ugur et al. Affordance learning from range data for multi-step planning. EpiRob, 2009.